Abstract
Introduction
Because journal selection before manuscript submission deserves further investigation in each medical specialty, we aimed to evaluate the ability of seven bibliometrics in the Radiology, Nuclear Medicine and Medical Imaging category of the Web of Knowledge to predict total citations over a 7-year period.
Methods
A linear mixed effects model with random slopes and intercepts was fitted to bibliometrics corresponding to 124 journals from 2007 to 2011, matched with their total citations from 2009 to 2013, as reported in the Journal Citation Reports Science Edition.
Results
The Eigenfactor Score, Article Influence Score, Cited Half-life, 5-year Impact Factor and Number of Articles are significant predictors of 2-year-ahead total citations (p ≤ 0.010 for all variables). The Impact Factor and Immediacy Index are not significant predictors. There was a significant global effect size (R2 = 0.934; p < 0.001), explaining 93.4% of the total variance.
Conclusions
Our findings support researchers' calls to stop the misuse of the IF alone to evaluate journals. Radiologists and other researchers should review a journal's bibliometrics when deciding where to submit a manuscript. A re-ranking of journals using the Eigenfactor Score, Article Influence Score, and Cited Half-life provides a better assessment of their significance and importance in particular disciplines.
Introduction
The Impact Factor (IF) reflects the relative importance of a journal within its field and quantifies the frequency with which the "average article" in a journal has been cited in a particular period [1]. The IF for a given indexed journal is calculated by dividing the number of times the articles published in that journal during the two previous years were cited, by the number of articles the journal published in the same interval [2]. Although the IF has become the leading metric for tenure and promotion considerations, and for budget and resource planning, in most universities, research institutions, and colleges [3], dissenting opinions state that the IF is not a perfect metric and has severe limitations [4,5,6,7].
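The two-year IF calculation described above can be sketched in a couple of lines (the journal figures below are invented for illustration):

```python
def impact_factor(citations_to_prev_2yrs, articles_prev_2yrs):
    """Two-year Impact Factor: citations received this year to items
    published in the two previous years, divided by the number of
    citable articles published in those two years."""
    return citations_to_prev_2yrs / articles_prev_2yrs

# Hypothetical journal: 900 citations in 2013 to its 2011-2012 articles,
# of which there were 300.
print(impact_factor(900, 300))  # → 3.0
```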
Recently, the American Society for Cell Biology, with journal editors, publishers, and other stakeholders, issued a pledge to move away from an over-reliance on journal impact factor and to seek new ways of assessing research output [8]; a similar opinion appeared in an editorial of BMC Medicine [9]. Furthermore, recent articles in journals of the Radiology, Nuclear Medicine and Medical Imaging category, written by editors and authors, commented on several aspects of the IF and its association with concurrent bibliometrics [1, 7, 10,11,12].
Alternative bibliometrics to the impact factor (IF), all reported annually by the Web of Knowledge managed by Thomson Reuters [13], have been claimed to improve the esteem of journals in specific categories [14, 15]. These bibliometrics include the 5-year Impact Factor, Immediacy Index, No. of Articles [published], Cited Half-life, Eigenfactor™ Score (ES), and Article Influence Score.
To further analyze the relationship between citations and the bibliometrics of Radiology, Nuclear Medicine and Medical Imaging journals reported annually by the Web of Knowledge, we assessed the bibliometrics of these journals and calculated their ability to predict total cites over a 7-year period. Our findings might help authors understand which bibliometrics offer a better ranking of journals before submission.
Materials and methods
Study design
We conducted a retrospective study to evaluate the performance of journals in the Radiology, Nuclear Medicine and Medical Imaging category of the Web of Knowledge [13], recording the values of eight selected bibliometrics listed in the Journal Citation Reports (JCR) [16] over a 7-year period. Definitions of each bibliometric used from the Web of Knowledge have been published recently [15].
Definitions of the alternative bibliometrics from the Web of Knowledge:
1.
The 5-year Impact Factor (5-yIF) is the IF of a given publication in a specific year, calculated over a 5-year period.
2.
The Cited Half-life (CHL) is a measure of the rate of decline of the citation curve, that is, the number of years that current citations take to decline to 50% of their initial value. It measures how long articles continue to be cited after publication [17].
3.
The Immediacy Index (ImIn) depicts how often, on average, authors cite very recent articles from a particular journal and, hence, how rapidly the average paper from that journal is adopted into the literature [17].
4.
The Eigenfactor™ Score (ES) is considered an indicator of the global influence or repercussion of manuscripts published in the Journal Citation Reports® (JCR) as part of the Web of Knowledge℠. Its calculation is based on the number of times articles published in the past 5 years have been cited in the JCR each year. It also considers which journals contributed those citations, so that highly cited journals influence the network more than lesser-cited journals; references from one article to another article in the same journal are removed, so that Eigenfactor Scores are not biased by journal self-citation [18, 19].
5.
The Article Influence™ Score (AIS) determines the average influence of a journal's articles over the first 5 years after publication. It is obtained from the ES with the same iterative algorithm, but takes the number of articles into account [20].
6.
The Article Number (AN) is the number of articles published during the selected year.
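As a concrete illustration of the CHL definition above, here is a minimal Python sketch. It is a whole-year simplification (the JCR interpolates fractional years), and the citation counts are invented:

```python
def cited_half_life(citations_by_age):
    """Cited half-life: number of years, counting back from the current
    year, needed to accumulate 50% of the journal's total citations.
    citations_by_age[0] holds citations to the current year's articles,
    citations_by_age[1] to last year's, and so on."""
    half = sum(citations_by_age) / 2.0
    running = 0.0
    for age, cites in enumerate(citations_by_age, start=1):
        running += cites
        if running >= half:
            return age
    return len(citations_by_age)

# Hypothetical age profile: most citations go to recent articles,
# so half the citation mass is reached within two years.
print(cited_half_life([120, 100, 80, 60, 40, 20, 10]))  # → 2
```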
Journal selection and measured periods
We chose the bibliometric values of journals in the Radiology, Nuclear Medicine and Medical Imaging category of the JCR Science Edition. A total of 124 journals were selected (Table 1). Journals that appeared in every JCR Science Edition between 2007 and 2013 were included. We assembled five sets of bibliometrics for each journal and each selected year, matched with its total citations 2 years ahead:
- Set 1: 2007 Bibliometrics vs. 2009 Total Cites.
- Set 2: 2008 Bibliometrics vs. 2010 Total Cites.
- Set 3: 2009 Bibliometrics vs. 2011 Total Cites.
- Set 4: 2010 Bibliometrics vs. 2012 Total Cites.
- Set 5: 2011 Bibliometrics vs. 2013 Total Cites.
A PDF file containing the bibliometrics of all reported journals (2007–2013) is available as a supplementary file for online access.
Sample size calculation
We followed the recommendation of Tabachnick and Fidell [21] for repeated measures analysis: univariate F is robust to modest violations of normality as long as there are at least 20 degrees of freedom for error in a univariate ANOVA and the violations are not due to outliers. Even with unequal n and only a few dependent variables (DVs), a sample size of about 20 in the smallest cell should ensure "robustness"; our dataset comprised 124 measurements per bibliometric.
Statistical analysis
Design of a mixed effects model
We assembled a predictive model of 2-year-ahead Total Cites by combining the overall effect of the bibliometrics while accounting for within-journal citation across five repeated measures. A linear mixed effects model with random slopes and intercepts was used to test the hypothesis that alternative bibliometrics of the Web of Knowledge surpass the IF as predictors of total cites. In agreement with the hierarchical structure of the data, we assembled a mixed model comprising a three-level hierarchy: level 1, the selected bibliometrics (continuous variables), measured consecutively over 5 years; level 2, the year of citation (that is, time, the repeated measures); and level 3, the journal ID (levels 1 and 2 are nested within each journal). Figure 1 shows the diagram of our three-level hierarchical data structure, assembling the variables and effects in the model.
Evaluating the need for multilevel modeling
The intraclass correlation coefficient (ICC) quantified the ratio of variation across years of citation and journals. A high ICC value indicates differences in mean bibliometric levels across time (year-over-year data), in which case multilevel modeling is needed to separately estimate the bibliometric variance occurring both across journals and across time of measurement [22].
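The ICC described above can be sketched from a one-way variance decomposition. The following is a minimal illustration, not the SPSS computation used in the study, and the journal values are invented:

```python
def icc_one_way(groups):
    """Rough one-way ICC(1): the share of total variance attributable to
    differences between groups (here, journals), estimated from the
    between- and within-group mean squares."""
    k = len(groups[0])                 # measurements per group (years)
    n = len(groups)                    # number of groups (journals)
    grand = sum(sum(g) for g in groups) / (n * k)
    means = [sum(g) / k for g in groups]
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((x - m) ** 2
              for g, m in zip(groups, means) for x in g) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical IFs for three journals over three years: values cluster
# tightly within each journal, so the ICC comes out close to 1.
print(icc_one_way([[1.0, 1.1, 1.05], [3.0, 3.1, 2.9], [5.0, 5.2, 4.9]]))
```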
Independent and dependent variables
A total of eight independent variables were included: seven bibliometrics (continuous variables), namely the Impact Factor, 5-year Impact Factor, Immediacy Index, No. of Articles, Cited Half-life, Eigenfactor Score, and Article Influence Score; and one categorical variable, the year of measurement (time-set of citations). The 2-year-ahead Total Citations was the dependent variable.
Mixed-model effects analysis
Data were analyzed using maximum likelihood (ML) estimation, an appropriate approach for studying individual change because ML focuses on the entire model (both fixed and random effects); for our data, it created a hierarchical model that nested repeated measures over five consecutive years within journals. To specify the within-individual error covariance structure that best fits the data and protects the precision of estimates, we evaluated the covariance matrix types most commonly reported in the literature (unstructured, scaled identity, compound symmetry, diagonal) [23]; each matrix type generated a different model. The fit of the models was assessed with the -2 log-likelihood (i.e., likelihood ratio/deviance test), Akaike's Information Criterion (AIC), and Bayesian Information Criterion (BIC).
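The information criteria used to compare the candidate covariance structures follow directly from each model's log-likelihood and parameter count. A minimal sketch (the log-likelihood values below are hypothetical, not figures from the study):

```python
from math import log

def aic(loglik, k):
    """Akaike's Information Criterion: 2k - 2*ln(L)."""
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    """Bayesian Information Criterion: k*ln(n) - 2*ln(L)."""
    return k * log(n) - 2 * loglik

# Hypothetical fits: a model with more covariance parameters (k) must
# improve the log-likelihood enough to justify the extra complexity;
# lower criterion values indicate a better fit.
print(aic(-1500.0, 10), bic(-1500.0, 10, 620))
print(aic(-1495.0, 20), bic(-1495.0, 20, 620))
```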
Our model considered the fixed effects of the selected independent variables. We added a random effect for the repeated measures, which allowed us to resolve the non-independence of the data by assuming a different "baseline" of the continuous independent variables for each journal. This random effect captures the variation due to individual differences; the model design expected multiple responses per journal, and these responses would depend on each journal's baseline level [24]. The fit of the data was represented graphically with a scatter plot of predicted versus observed values of Total Citations for the whole model, labeled by subgroup (year of citations). Linear regression (LR) analysis provided the R2 and p values [25].
Measure of the effect size
We computed a pseudo-R2 as a global effect size statistic, even though the response variable variance was partitioned across the levels of our model. As previously described, we used the predicted score for each journal in the sample, calculated the correlation between the observed and predicted scores, and squared that correlation [22]. The effect size (proportion of the variance in the dependent variable explained by the independent variables) was interpreted using the criteria proposed by Cohen [26]: 0.1–0.29, small effect; 0.30–0.49, moderate effect; and ≥ 0.5, large effect. All analyses were carried out using IBM's SPSS software (version 22.0.0.0, IBM Corporation, Armonk, NY, USA). Statistical significance was indicated by p < 0.05 (two-tailed).
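The pseudo-R2 described above reduces to the squared Pearson correlation between observed and model-predicted outcomes. A minimal sketch with invented totals:

```python
def pseudo_r2(observed, predicted):
    """Pseudo-R2 for a mixed model: the squared Pearson correlation
    between observed and model-predicted outcome values."""
    n = len(observed)
    mo = sum(observed) / n
    mp = sum(predicted) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(observed, predicted))
    var_o = sum((o - mo) ** 2 for o in observed)
    var_p = sum((p - mp) ** 2 for p in predicted)
    return cov ** 2 / (var_o * var_p)

# Hypothetical total-cite values and model predictions: a close fit
# yields a value near 1 (a large effect by Cohen's criteria).
print(pseudo_r2([100.0, 400.0, 900.0, 1600.0],
                [120.0, 380.0, 950.0, 1550.0]))
```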
Evolution of IF and ES over time
We completed our analysis by separately plotting the evolution of the IF and ES in the Radiology, Nuclear Medicine and Medical Imaging category, sorting the journals into five groups based on the values of their IF [(level 1, 0–0.99); (level 2, 1.0–1.49); (level 3, 1.5–1.99); (level 4, 2.0–2.99); (level 5, ≥ 3.0)] and ES [(level 1, 0–0.00120); (level 2, 0.00121–0.00285); (level 3, 0.00286–0.00565); (level 4, 0.00566–0.01200); (level 5, ≥ 0.01201)]. These values correspond to the 20th, 40th, 60th, 80th and 100th percentiles, respectively. We plotted the selected bibliometric values from 2007 to 2013 and performed a split-plot factorial ANOVA.
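The quintile grouping described above amounts to a simple cut-point lookup. The ES cut-points below are those listed in the text; the helper name is ours:

```python
def quintile_level(value, cutpoints):
    """Assign a journal to one of five levels given four ascending
    cut-points (the 20th, 40th, 60th and 80th percentiles)."""
    for level, cut in enumerate(cutpoints, start=1):
        if value <= cut:
            return level
    return 5

# ES cut-points from the text (percentiles 20/40/60/80):
es_cuts = [0.00120, 0.00285, 0.00565, 0.01200]
print(quintile_level(0.0030, es_cuts))  # → 3
print(quintile_level(0.0500, es_cuts))  # → 5
```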
Displacement of the ranking place
To show graphically how alternative bibliometrics could reorder the top 25 journals initially ranked by the IF, we present a simple line graph connecting the rankings of the top 25 journals with their most significant predictive metrics.
Results
Multilevel modeling analyses
Intraclass correlation coefficients
We found significant ICC values for all selected bibliometrics: Impact Factor (ICC = 0.988, p < 0.001); 5-year Impact Factor (ICC = 0.993, p < 0.001); Immediacy Index (ICC = 0.930, p < 0.001); No. of Articles (ICC = 0.989, p < 0.001); Cited Half-life (ICC = 0.981, p < 0.001); Eigenfactor Score (ICC = 0.998, p < 0.001); and Article Influence Score (ICC = 0.993, p < 0.001). These ICC values justified the need for multilevel modeling: the presence of variation in mean bibliometric levels across time (year of citation) indicated the need to separately estimate the bibliometric variance occurring both across journals and over time.
Overall fit of models
We performed separate analyses using the most common covariance matrix types; the unstructured matrix type produced the best model, showing the smallest values in the information criteria table. This matrix type has been reported to offer the best fit for longitudinal data, as it imposes no assumptions on the error structure [27]. Table 2 shows the assessment of the overall fit of the multivariate models.
Significant predictors of total citations and beta coefficients of the regression model
All independent variables were included: Impact Factor, 5-year Impact Factor, Immediacy Index, No. of Articles, Cited Half-life, Eigenfactor Score, and Article Influence Score. For random effects, we used by-subject random slopes and intercepts for the effect of repeated measures (time) (Wald Z = 10.845, p < 0.001). There was a significant effect for five independent variables in the model: 5-year Impact Factor, No. of Articles, Cited Half-life, Eigenfactor Score, and Article Influence Score (p ≤ 0.010 in all cases). The Impact Factor and Immediacy Index were not significant. Table 3 shows the main effects for the selected model.
The most significant coefficient corresponded to the ES; it showed a positive, meaningful direction in the outcome. Table 4 shows the unstandardized beta coefficients for each variable and their CIs. Figure 2 shows line graphs identifying each selected journal in the final model; the existence of random slopes and intercepts is evident.
Global effect
The global effect size (pseudo-R2 correlation between the observed and predicted scores) for the whole model depicted an R2 value = 0.934, p < 0.001; this value corresponds to a large effect size. Figure 3 illustrates the regression line between the observed and predicted values for total citations.
Evolution of IF and ES over time
The split-plot ANOVA showed no significant interaction between time and IF (p > 0.05) but a main effect of time (p < 0.001); there was neither an interaction between time and ES nor a main effect of time (p > 0.05 in both analyses). For journals with an IF > 3.0, there was a continuous growing trend from 2007 to 2013; journals with an IF between 2 and 3 decreased in value after 2010; and those below an IF of 2.0 remained stable after that date. Regarding the ES, journals with values higher than 0.01201 showed a decreasing trend from 2007 to 2011, followed by a recovery that continued until 2013. All journals with an ES below 0.01200 depicted a very mild downward trend from 2007 to 2013. Figure 4a, b shows the IF and ES trends from the split-plot ANOVA.
Displacement of journals previously ranked by the IF
There was a significant re-ranking among the top 25 journals initially listed by the IF (year 2013). After we reclassified the journals based on their ES, only 16 journals remained in the top 25; some were demoted, while others climbed higher. For example, JACC-Cardiovasc Imag moved from 1st to 20th, and Med Phys jumped from 25th to 6th position. When ranked using the CHL, only 3 of the original top 25 IF journals stayed within the top 25 places. Figure 5 depicts the ranking displacements based on IF, ES, and CHL.
Discussion
The ranking of journals by their IF has become a primary consideration when authors decide where to submit their papers. The IF is misused as a proxy for the quality of individual articles [8]. Researchers usually look for journals with the highest impact factor instead of journals with the best audience for their research [4].
The success of researchers is nowadays judged by the number of papers they have published in high-IF journals [28]. The scientific impact of journals as evaluated by bibliometrics is a complex, multi-dimensional construct; therefore, the use of a single bibliometric index to rank, evaluate, and value journals is inappropriate. Readers should look beyond the impact factor and assess scientific articles individually [29]. Preferably, the use of multiple metrics with complementary features provides a more comprehensive view of journals and their relative placements in their fields [30].
Our study adds evidence to numerous reports on the apparent limitations of the IF as a significant predictor of total citations [5, 31,32,33]. The use of mixed-model analyses allowed us to study intra- and interindividual differences in the curve parameters (slopes and intercepts) [23]; this approach dismisses the assumption of homogeneity of regression slopes, casts aside the assumption of independence between cases, and tolerates missing data [24]. Our results show the ES to be the bibliometric that best captures the prestige of a journal, an ability previously compared with that of the IF [34]. However, our data cannot establish whether the ES can assess the actual dissemination of an article (i.e., its use, as well as the category of journals that include it in their reference lists) [20]. The ranking displacements using the ES reject the previous statement that the IF and ES produce similar rank orders of medical journals [35]. We consider our finding that journals with ES values higher than 0.01201 showed a decreasing trend from 2007 to 2011, followed by a recovery until 2013, to depict a real pattern in the data, as this subgroup behavior was not observed at lower ES values.
Our findings agree with a similar study in the Gastroenterology and Hepatology category [14], although our global effect size (R2) was slightly lower (0.934 vs. 0.999). To explain our findings, we note that although the IF is a per-article measure, people use it to evaluate journals. The ES, on the other hand, is a per-journal measure that reflects each journal's size through its total citations; it is therefore considered superior for evaluating the quality of journals [34]. The non-significance of the IF is explained by the fact that it measures citations per article and is thus a poor indicator of total citations, given that scholarly journals vary in size over multiple orders of magnitude.
The ES is gaining traction because it focuses on the impact of particular articles, but its sole dependence on citations still limits it. The remaining bibliometrics are less well known as predictors of citations; for example, the number of articles at least scales with journal size but does not account for quality at all, which left the ES the winner in an unbalanced competition [14]. We found a significant predictive ability for the AIS, which has been reported to correlate positively with the IF (r = 0.94); however, we did not find an analysis of the AIS similar to our study [36].
Readers should be aware that even though the number of citations has been widely used as a metric to rank papers, some iterative processes are now considering new approaches, such as the PageRank algorithm, which has been applied to citation networks [37]. Moreover, more modern usage-based article-level metrics are also being explored, such as the Usage Factor, Publisher and Institutional Repository Usage Statistics (PIRUS2), and the Y-Factor [30].
Additional factors are worth mentioning: each medical specialty exhibits a different IF range; for example, a journal in the oncology field might have an IF up to 30 times as high as the corresponding figure in the forensic medicine category [3]. All journals have a diverse set of citations, and even the best publications contain some papers that are never cited [38]. That is, citations are not equally distributed: fewer than 20% of articles account for more than 50% of the total number of citations [39]. Despite these facts, the misuse of the IF for judging the value of science persists, because it confers significant benefits on individual scientists and journals [40].
Several limitations of this study need to be addressed. A detailed explanation of each bibliometric is beyond the scope of this article. Our analysis was not a conventional regression model but a linear mixed effects design; part of the benefit of a study of this kind is that the assumption of independence between cases is cast aside and correlation among variables in the model is expected [24]. Our predictive analysis was limited to time-sets in a 2-year comparison period (five sets of repeatedly measured bibliometrics from 2007 until 2013). This timespan reflects the fact that the first publications on the ES methodology appeared in 2008 [19]. Also, the original idea for this project was conceived at the beginning of 2015, when the most recent list of bibliometrics published by the JCR covered the year 2013. After our previous articles on this topic were published in 2015 and 2018, we aimed to remain consistent with those reports [14, 15].
Additional factors, such as longer time frames, the number of articles published in each issue, the circulation of each journal, and a host of factors impacting "citation" (self-citation, semi-mandatory, and mandatory citation), might all influence citation calculations. We did not include these possible confounding factors, as the Web of Knowledge does not consider them. We acknowledge that normalization of journal citations by article count is desirable; however, we used the raw data provided by the ISI Web of Knowledge at the time we wrote this study to assemble our predictive model. Factors that may affect where authors submit their work were also outside the scope of this study: topic, affiliation with a society, geography, rejection after an earlier submission, familiarity with the submission and revision process, turnaround time, and invitations by editors. Our previous publications on this topic were in the Gastroenterology and Hepatology [14] and Neurosciences [15] categories. Despite the similarities in methodology, we do not consider this study a case of self-plagiarism, but a needed proof-of-concept (PoC), by which we mean the application of a specific method to demonstrate its feasibility. Because a PoC aims to assess the real potential of a method for clinically meaningful effects in the intended population [41], we applied the same methodology (a linear mixed model design) to our target journal category and obtained similar results. Readers should be aware that the JCR includes approximately 171 categories in the sciences and 54 in the social sciences, so the publication of future studies validating our model in other specialties would be desirable.
In conclusion, the Impact Factor and Immediacy Index show no ability to predict 2-year-ahead annual citations; our findings support researchers' calls to stop the misuse of the IF alone to evaluate journals. A re-ranking of journals using the Eigenfactor Score, Article Influence Score, and Cited Half-life provides a better assessment of the significance and importance of scientific journals in particular disciplines. Radiologists and other researchers should review these scores in their decision-making during the manuscript submission phase; they may even become a new standard of the quality and validity of research.
References
Sharma M, Sarin A, Gupta P, Sachdeva S, Desai AV (2014) Journal impact factor: its use, significance and limitations. World J Nucl Med 13(2):146. https://doi.org/10.4103/1450-1147.139151
Kumar V, Upadhyay S, Medhi B (2009) Impact of the impact factor in biomedical research: its use and misuse. Singapore Med J 50(8):752–755
Baethge C (2012) Impact factor—a useful tool, but not for all purposes. Deutsches Arzteblatt Int 109(15):267–269. https://doi.org/10.3238/arztebl.2012.0267
Rawat S (2014) How is impact factor impacting our research? Biomed J. https://doi.org/10.4103/2319-4170.131388
Elliott DB (2014) The impact factor: a useful indicator of journal quality or fatally flawed? Ophthalmic Physiol Optics J Br Coll Ophthalmic Opt 34(1):4–7. https://doi.org/10.1111/opo.12107
Callaway E (2016) Beat it, impact factor! Publishing elite turns against controversial metric. Nature 535(7611):210–211. https://doi.org/10.1038/nature.2016.20224
Jackson A (2010) The impact factor game: the rising impact factor of the British Journal of Radiology—a success story? Br J Radiol 83(986):93–98. https://doi.org/10.1259/bjr/18689409
Ending the tyranny of the impact factor (2014) Nat Cell Biol 16(1):1. https://doi.org/10.1038/ncb2905
Diamandis EP (2017) The Journal Impact Factor is under attack—use the CAPCI factor instead. BMC Med 15(1):9. https://doi.org/10.1186/s12916-016-0773-5
Choudhri AF, Siddiqui A, Khan NR, Cohen HL (2015) Understanding bibliometric parameters and analysis. Radiographics 35(3):736–746. https://doi.org/10.1148/rg.2015140036
Sardanelli F, Sconfienza LM (2013) Declining impact factor of radiologic journals: a matter for debate. AJR Am J Roentgenol 201(3):W391–W393. https://doi.org/10.2214/AJR.12.10256
Shanta A, Pradhan AS, Sharma SD (2013) Impact factor of a scientific journal: is it a measure of quality of research? J Med Phys Assoc Med Phys India 38(4):155–157. https://doi.org/10.4103/0971-6203.121191
Thomson Reuters (2014) Web of knowledge. http://wokinfo.com. Accessed 19 May 2014
Roldan-Valadez E, Rios C (2015) Alternative bibliometrics from impact factor improved the esteem of a journal in a 2-year-ahead annual-citation calculation: multivariate analysis of gastroenterology and hepatology journals. Eur J Gastroenterol Hepatol 27(2):115–122. https://doi.org/10.1097/MEG.0000000000000253
Diaz-Ruiz A, Orbe-Arteaga U, Rios C, Roldan-Valadez E (2018) Alternative bibliometrics from the web of knowledge surpasses the impact factor in a 2-year ahead annual citation calculation: linear mixed-design models’ analysis of neuroscience journals. Neurol India 66(1):96–104. https://doi.org/10.4103/0028-3886.222880
Garfield E (1996) How can impact factors be improved? BMJ 313(7054):411–413
Mathur VP, Sharma A (2009) Impact factor and other standardized measures of journal citation: a perspective. Indian J Dent Res Off Publ Indian Soc Dent Res 20(1):81–85
Bergstrom CT, West JD (2008) Assessing citations with the Eigenfactor metrics. Neurology 71(23):1850–1851. https://doi.org/10.1212/01.wnl.0000338904.37585.66
Bergstrom CT, West JD, Wiseman MA (2008) The Eigenfactor metrics. J Neurosci Off J Soc Neurosci 28(45):11433–11434. https://doi.org/10.1523/JNEUROSCI.0003-08.2008
Ascaso FJ (2011) Impact factor, eigenfactor and article influence. Archivos de la Sociedad Espanola de Oftalmologia 86(1):1–2. https://doi.org/10.1016/j.oftal.2010.12.005
Tabachnick BG, Fidell LS (2013) Multivariate normality. In: Tabachnick BG, Fidell LS (eds) Using multivariate statistics, 6th edn. p 253
Peugh JL (2010) A practical guide to multilevel modeling. J Sch Psychol 48(1):85–112. https://doi.org/10.1016/j.jsp.2009.09.002
Shek DT, Ma CM (2011) Longitudinal data analyses using linear mixed models in SPSS: concepts, procedures and illustrations. Sci World J 11:42–76. https://doi.org/10.1100/tsw.2011.2
Field A (2013) Multilevel Linear Models. In: Field A (ed) Discovering Statistics using SPSS, 4th edn. SAGE Publications Inc., London, pp 734–776
Pallant J (2011) Multiple regression. SPSS survival manual. Allen & Unwin, Crows Nest, pp 148–167
Cohen JW (1988) Statistical power analysis for the behavioral sciences, 2nd edn. Lawrence Erlbaum Associates, Hillsdale
Singer JD (1998) Using SAS PROC MIXED to fit multilevel models, hierarchical models, and individual growth models. J Educ Behav Stat 23(4):323–355. https://doi.org/10.3102/10769986023004323
Butler JS, Sebastian AS, Kaye ID, Wagner SC, Morrissey PB, Schroeder GD, Kepler CK, Vaccaro AR (2017) Understanding traditional research impact metrics. Clin Spine Surg 30(4):164–166. https://doi.org/10.1097/BSD.0000000000000530
Reider B (2017) Brace for impact. Am J Sports Med 45(10):2213–2216. https://doi.org/10.1177/0363546517721707
Gutierrez FR, Beall J, Forero DA (2015) Spurious alternative impact factors: the scale of the problem from an academic perspective. BioEssays 37(5):474–476. https://doi.org/10.1002/bies.201500011
Citrome L (2013) How we rate: is impact factor the most important measure? Int J Clin Pract 67(9):819–820. https://doi.org/10.1111/ijcp.12266
Zupanc GK (2014) Impact beyond the impact factor. J Compar Physiol A Neuroethol Sens Neural Behav Physiol 200(2):113–116. https://doi.org/10.1007/s00359-013-0863-1
Kodumuri P, Ollivere B, Holley J, Moran CG (2014) The impact factor of a journal is a poor measure of the clinical relevance of its papers. Bone Joint J 96-B(3):414–419. https://doi.org/10.1302/0301-620x.96b3.32279
Sillet A, Katsahian S, Range H, Czernichow S, Bouchard P (2012) The Eigenfactor Score in highly specific medical fields: the dental model. J Dent Res 91(4):329–333. https://doi.org/10.1177/0022034512437374
Rizkallah J, Sin DD (2010) Integrative approach to quality assessment of medical journals using impact factor, eigenfactor, and article influence scores. PLoS One 5(4):e10204. https://doi.org/10.1371/journal.pone.0010204
Oosthuizen JC, Fenton JE (2014) Alternatives to the impact factor. Surgeon 12(5):239–243. https://doi.org/10.1016/j.surge.2013.08.002
Yao L, Wei T, Zeng A, Fan Y, Di Z (2014) Ranking scientific publications: the effect of nonlinearity. Sci Rep 4:6663. https://doi.org/10.1038/srep06663
Fersht A (2009) The most influential journals: impact Factor and Eigenfactor. Proc Natl Acad Sci USA 106(17):6883–6884. https://doi.org/10.1073/pnas.0903307106
Weale AR, Bailey M, Lear PA (2004) The level of non-citation of articles within a journal as a measure of quality: a comparison to the impact factor. BMC Med Res Methodol 4:14. https://doi.org/10.1186/1471-2288-4-14
Casadevall A, Fang FC (2014) Causes for the persistence of impact factor mania. mBio 5(2):e00064–e00014. https://doi.org/10.1128/mbio.00064-14
Chen C (2018) Opportunities and pitfalls in clinical proof-of-concept: principles and examples. Drug Discov Today. https://doi.org/10.1016/j.drudis.2018.01.045
Acknowledgements
Ulises Orbe-Arteaga, MSc, was a research fellow at the MRI unit of Medica Sur Clinic and Foundation during 2014–2015. No IRB approval was required for this manuscript.
Ethics declarations
Conflict of interest
The authors have no conflict of interests to disclose.
Ethical standards
This article does not contain any studies with human participants or animals performed by any of the authors.
Electronic supplementary material
Cite this article
Roldan-Valadez, E., Orbe-Arteaga, U. & Rios, C. Eigenfactor score and alternative bibliometrics surpass the impact factor in a 2-years ahead annual-citation calculation: a linear mixed design model analysis of Radiology, Nuclear Medicine and Medical Imaging journals. Radiol med 123, 524–534 (2018). https://doi.org/10.1007/s11547-018-0870-y