Clarivate Analytics has completed its transition from Thomson Reuters’ Intellectual Property & Science business, which includes the journal impact factor (JIF), following the purchase of that business by Onex and Baring Private Equity Asia in July 2016 (Marketwatch 2016). The transition was fairly smooth, and the JIF is still computed according to the concept formulated by Eugene Garfield and Irving Sher some 60 years ago (Garfield 2006). Nowadays, the JIF is often erroneously used as a proxy to assess the quality of research or of a researcher, leading to a movement (DORA; http://www.ascb.org/dora/) that has been pushing for it to be abandoned in academic assessment, not only because this metric is heavily gamed, but also because it is a non-academic barometer that can be easily abused (Callaway 2016). Over decades of use, the JIF has become too entrenched in the cultural fabric of academia at a global scale to be ignored completely, although we believe that it could soon become obsolete. Variations such as the Source Normalized Impact per Paper (SNIP) have been devised, giving a somewhat false impression of a move away from this metric. However, since the SNIP is essentially a JIF-based metric, it inherits almost all of the misuses of the JIF (Larivière et al. 2016). Provided that academics, including authors and editors, continue to treat the JIF as a prime indicator of quality, which it is not (Favaloro 2008), and provided that senior authorities in academic bodies continue to reward their academics based on JIF scores (per author, per article, or per journal), it will be difficult to envisage a cure for this endemic obsession, sometimes referred to as IF mania or impactitis (Casadevall and Fang 2015).
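
For reference, and to make the arithmetic behind this much-debated number explicit, the two-year JIF of a journal for year Y can be written as follows; this is a generic formulation of Garfield and Sher’s concept in our own notation, and it does not capture every detail of Clarivate Analytics’ implementation (for example, what exactly counts as a “citable item”):

\[
\mathrm{JIF}_{Y} \;=\; \frac{C_{Y}(Y-1) + C_{Y}(Y-2)}{N_{Y-1} + N_{Y-2}},
\]

where \(C_{Y}(Y-k)\) is the number of citations received in year \(Y\) by items the journal published in year \(Y-k\), and \(N_{Y-k}\) is the number of citable items the journal published in year \(Y-k\).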

The argument can be made that a highly cited paper is one that carries a high paper-based JIF, such as the SNIP, and has proved useful to a wide swathe of researchers, as measured by the number of times it has been cited over a given period of time. In that sense, a high number of citations for a paper can be equated with an indication of its usefulness, but it does not necessarily reflect the intrinsic quality of the paper, or that of the journal in which it was published. For example, it is now well established that journals ranked high on the JIF scale also tend to have a high rate of retractions (Woolston 2014). In addition to the argument as to whether the JIF reflects quality, importance or usefulness, its abuse may be the greatest problem, because it has become institutionalized. For example, in China, Iran, Indonesia, Brazil, Russia (see country- and region-wide examples in several papers in Teixeira da Silva 2013a; e.g., p. 38, 46, 57, 61, 66) or Mexico, the JIFs of the journals in which scientists publish their results are blatantly used in remuneration schemes that financially reward academics based on these JIF scores. An example of such policies is the scheme implemented in Mexico in 1984, known as the Sistema Nacional de Investigadores (SNI), which currently supports a staff of ca. 25,000 researchers. The economic support for these individuals is graded on a scale from 1 to 5, and is based essentially on their portfolio of publications. As set out in the SNI documents, the metrics used during the periodic evaluation of researchers are the JIF and the number of citations received, excluding self-citations and citations in theses (CONACyT 2016). Although this economic scheme certainly prevented a brain drain and boosted Mexican scientific production over the past 30 years, it has also been a crucible for deviant behaviors, the most visible being the steady growth of guest authorship (Gómez Nashiki et al. 2014). Thus, whatever arguments exist in favor of a possible relation between the JIF and quality, the potential usefulness of the JIF is erased by its global abuse in academia, and the metric seems to survive only because of its use in large institutional schemes.

When the JIF was still owned by Thomson Reuters, a rewards system was put into place to offer some form of recognition to the most highly cited researchers, referred to as Highly Cited Researchers (HCRs) (see Footnote 1). Each year, a few months after the new JIF has been published, the lists of HCRs are released, leading to great jubilation among the scientists, laboratories and institutes whose staff appear in this exclusive, highly cited elite. In 2016, Clarivate Analytics was to have celebrated its first HCR announcement, following in the footsteps of its predecessor. Unfortunately, something went grossly wrong during this process. Clarivate Analytics sent a congratulatory email, together with a link to the digital badge and certificate that accompany the HCR announcement, to an undisclosed number of scientists (Appendix 1). The problem is that several (perhaps the majority) of these were in fact not HCRs. What followed was a fairly chaotic scene on Twitter (some samples in Fig. 1). Some scientists made a mockery of the situation, while others sounded totally baffled, and delighted, at having been selected as an HCR. Within hours, Clarivate Analytics contacted those who had been erroneously rewarded to indicate that a mistake had occurred (Oransky 2016) (Appendix 2), while taking the opportunity to praise HCRs.

Fig. 1 Screen-shots of Tweets showing the shock, surprise and dismay (almost a tragicomedy) expressed by non-HCRs at first being awarded the HCR badge, then having it unceremoniously taken away. Tweet URLs are listed from left to right, top to bottom, in this order: https://twitter.com/SteZhao/status/799618638772740096; https://twitter.com/PanuMinkkinen/status/799620607524618240; https://twitter.com/eknahm/status/799622318838272001; https://twitter.com/dandavishello/status/799626022878519296; https://twitter.com/DAZaitsev/status/799634568554942464; https://twitter.com/mrioslago/status/799645295026049025; https://twitter.com/SanTonyB/status/799652112040914944; https://twitter.com/RMGiurgiu/status/799662013412675584; https://twitter.com/GemHols/status/799672165608001537; https://twitter.com/AnnekeBatenburg/status/799683243465441280; https://twitter.com/turner_hana/status/799817504121946112; https://twitter.com/rigasarva/status/800022421260607489

On November 21, 2016, the first author contacted Heidi Siegel, the Director of External Relations at Clarivate Analytics, asking her to provide comment and an explanation, and to indicate the precise number of researchers who had, for a few hours, been erroneously awarded the HCR badge. The response was received within 24 h: “Regarding information about a precise number of scientists who were sent the wrong highly cited researcher email last week, there were a number of people who received the letter in error. However, the number we should focus on are the 3265 Highly Cited Researchers (HCRs) for 2016 who are to be celebrated. Highly Cited Researchers derive from papers that are defined as those in the top 1% by citations for their field and publication year in the Web of Science. As leaders in the field of bibliometrics we appreciate the effort required to reach this achievement and celebrate those who have done so this year.”
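
To make the quoted criterion concrete, the following minimal sketch flags papers that fall “in the top 1% by citations for their field and publication year”. The record structure, field label, tie-handling and the helper name flag_top_one_percent are our own illustrative assumptions; they do not describe Clarivate Analytics’ actual data or methodology.

```python
from collections import defaultdict

def flag_top_one_percent(papers):
    """Flag papers in the top 1% by citations within each (field, year) group.

    `papers` is a list of dicts with "field", "year" and "citations" keys,
    a hypothetical record structure used only for this sketch.
    """
    groups = defaultdict(list)
    for p in papers:
        groups[(p["field"], p["year"])].append(p["citations"])

    highly_cited = []
    for p in papers:
        counts = sorted(groups[(p["field"], p["year"])], reverse=True)
        # Rank marking the top 1% of the (field, year) group; at least one paper qualifies.
        cutoff_rank = max(1, int(len(counts) * 0.01))
        threshold = counts[cutoff_rank - 1]
        if p["citations"] >= threshold:
            highly_cited.append(p)
    return highly_cited

# Hypothetical usage: 200 papers from one field and year, cited 0-199 times each.
papers = [{"field": "Plant Sciences", "year": 2014, "citations": c} for c in range(200)]
print(len(flag_top_one_percent(papers)))  # prints 2 (the two most-cited papers)
```

In practice, ties at the cut-off and the definition of a “field” make such a calculation far less straightforward than this sketch suggests, which is one more reason why the underlying data should be open to scrutiny.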

This statement is of some concern because it cements the notion that Clarivate Analytics is trying to play down the seriousness of the technical error, while continuing to promote the now old-fashioned (since the launch of Elsevier’s CiteScore) and academically useless JIF as a marketing and gaming tool in academia.

In essence, there has been no change in the JIF-gaming mentality in the transition from Thomson Reuters to Onex and Baring Private Equity Asia. The lack of transparency and accountability that was pointed out in detail in 2013 (Teixeira da Silva 2013b) thus continues. The integrity of the published JIF will continue to be questioned by many researchers as long as the database used to calculate it is not made available or, even worse, is manipulated in a way that suits the interests of Clarivate Analytics’ customers (Rossner et al. 2007). On the other hand, it is unclear why Clarivate Analytics considers that the number of researchers who were erroneously contacted and congratulated for being HCRs in 2016 should be kept secret. With such opaque practices, how can the community trust the JIF-related figures calculated by this company?

There is a common feature between HCRs and JIF-related metrics: both have highly skewed distributions, and the 3265 HCRs identified by Clarivate Analytics for 2016 obviously represent considerably less than 1% of the global scholarly community. Although we agree with Clarivate Analytics that the “Highly Cited Researchers 2016 represents some of world’s most influential scientific minds” (Footnote 2), and therefore that most of those HCRs set a good example to follow, we remain convinced that the delivery of badges ultimately amounts to a futile exercise, with no true academic value, based on unverifiable data. Moreover, Clarivate Analytics seems to have missed the target by confusing the prestige (deserved or not) of researchers with the actual weight of their research, which is increasingly carried out by large collaborative teams consisting essentially of non-HCRs. By focusing on the wrong target, or not entirely on the full complement of targets, Clarivate Analytics is perpetuating the prevalence of vanity in science, and the HCR badges are reminiscent of the optical illusion created by the American illustrator Charles Allan Gilbert (Kearl 2015; see Footnote 3), illustrating the sentence found in the Latin Vulgate: Vanitas vanitatum, omnia vanitas (“Vanity of vanities, all is vanity”).