Introduction

A growing number of voices warn about the uncertain future of the scholarly monograph (e.g. Williams et al. 2009; Watkinson et al. 2016). Still, monographs remain the primary academic output in the arts, humanities and some social sciences (Nederhof 2006; Huang and Chang 2008). Indeed, the 2014 UK Research Excellence Framework (REF 2014) reported that book submissions represented 55% of all submissions in the humanities, 33% in the arts, and 22% in the social sciences. At the other end, books represented about 0.5% of all submissions in science, engineering and medicine (Kousha et al. 2016). Despite this, books are still not fairly assessed in evaluation exercises. Until recently, books were absent from the main bibliometric databases, relegating monographs to the status of a secondary scientific product. Due to the pressure exerted by national evaluation schemes, many researchers have shifted, or are shifting, from books to journal articles as their preferred dissemination channel (Research Information Network 2009). Furthermore, almost all university rankings, including those based solely on bibliometric data—like the Leiden Ranking—ignore books even in disciplines where they play a crucial role. Only a few evaluative bibliometric analyses take their importance in such fields into account, and all of them corroborate the important role of monographs and their significant contribution to citation analyses (e.g. Kousha and Thelwall 2009; Gorraiz et al. 2016).

The assessment of the impact of monographs is nowadays a big challenge and a hot topic in the scientometric field. Citation analyses are an acceptable proxy for the measurement of the impact of research publications, but only for a subset of the scientific community, namely the "publish or perish" group, and only for the impact reflected in documented scholarly communication. However, it is common knowledge that many disciplines address much broader audiences within the academic community and even beyond. Monographs can have educational or public interest value as well as research impact (Kousha and Thelwall 2015c), and they can aim to culturally enrich non-academic audiences (Small 2013). In this context, new metrics, specifically usage metrics (Gorraiz et al. 2014b; Glänzel and Gorraiz 2015) and altmetrics (Priem 2014; Robinson-Garcia et al. 2014), have the potential to support alternative evaluation methods that complement citation-based indicators, providing a much broader and more accurate picture of the impact of monographs.

The launch of the Book Citation Index (BKCI) in 2011 enabled and eased access to citation data for large collections of books. It paved the way for a large number of studies on the citation patterns, characteristics and peculiarities of books (e.g. Kousha et al. 2011; Leydesdorff and Felt 2012; Gorraiz et al. 2013; Torres-Salinas et al. 2012, 2013, 2014a, b). Still, many shortcomings must be overcome before citation data can be used for evaluative purposes. Some of these shortcomings are due to coverage and technical issues of the data sources (Torres-Salinas et al. 2014a), while others are related to the design and conceptualization of the indicators (Chi 2016). In addition, other approaches have been suggested in the literature; the main ones are the following:

Library catalog analysis

This method is based on the number of library holdings per book title, and several approaches have been presented. Torres-Salinas and Moed (2009) used the number of catalog entries per book title in the WorldCat catalog; Linmans (2010) used library holdings, while White et al. (2009) went further and considered them an indicator of perceived cultural benefit.

Library loans

Influenced partly by the library catalog analysis method and by the work of Schlögl and Gorraiz (2006), Cabezas-Clavijo et al. (2013) suggested that library loans may be a potential proxy for measuring the use of books. However, problems related to data cleaning and missing data have prevented this methodology from being expanded further.

Publishers’ prestige

Here we observe two approaches. Giménez-Toledo et al. (2013) developed a publishers' ranking based on expert opinion. This approach has been implemented, with different methodological variations, in countries such as Spain, Denmark, Finland and Norway (Giménez-Toledo et al. 2016). Torres-Salinas et al. (2012) followed a more traditional approach, using citations from the Book Citation Index to develop a set of indicators by publisher and field, emulating the Journal Citation Reports.

Book reviews

If we consider the book as the main unit of analysis (disregarding publishers or book chapters), book reviews, which are extensively used to report on recently released books, could be used to quantify the value of books. Zuccala and van Leeuwen (2011) were the first to suggest such an approach, and other studies have followed since. For instance, Gorraiz et al. (2014a) proposed this methodology not as a substitute but as a complement to overcome the coverage limitations of the Book Citation Index. Another perspective uses social platforms for books to retrieve users' opinions and reviews (Kousha and Thelwall 2015a, b; Kousha et al. 2016; Zuccala et al. 2015).

Across this wide variety of proposals, little consensus can be found on the best proxy of quality or impact. The unit of analysis also differs between proposals: some focus on books, others on publishers or book chapters. Also, with some exceptions (Zuccala et al. 2015), there is no conceptual analysis of the meaning of the different impact proxies used (i.e., citations, library holdings, etc.).

There is increasing interest in the use of altmetrics for analysing scholarly impact (Priem 2014), and many studies have been devoted to analysing their potential and caveats (e.g., Costas et al. 2015; Haustein 2016; Thelwall et al. 2013). Many commercial solutions are also currently available to collect social media mentions. The main one used in research so far is Altmetric.com, a company owned by Digital Science which collects mentions of scientific publications from a wide variety of social media platforms (Robinson-Garcia et al. 2014). However, most of these studies focus mainly on journal articles. Recently, new sources and collaborative projects have been launched to fill this gap. On the one hand, Altmetric.com, in collaboration with Springer, launched the Bookmetrix project (http://www.bookmetrix.com), which provides altmetric indicators at the book and book chapter level for all Springer publications.

On the other hand, there is the PlumX suite, a commercial solution offered by EBSCO (and recently acquired by Elsevier). This platform covers books as well as journal articles and provides a wide range of indicators, including altmetric indicators. This makes it possible to consider many of these proxies simultaneously, offering a multidimensional view of the impact of monographs, comparing the results provided by each proxy, and identifying the most relevant indicators. So far, altmetric indicators for books have only been explored through specific indicators (Kousha and Thelwall 2015). This study is a first exploratory attempt to provide such a multidimensional approach. We analyse 18 different indicators related to citations, downloads, library holdings and altmetrics for a given university. The main goal is to study these different dimensions together, in order to better understand how they complement each other and in which cases certain indicators may provide more or less information than others. Specifically, we pose the following objectives:

  1. Analyse the different dimensions present in PlumX based on the 18 indicators it provides, and their potential usefulness for research evaluation purposes.

  2. Explore to what extent these indicators complement the information reported by traditional evaluations focused on journal output.

For this, we analyse the monograph output of the University of Granada. This is the first study to use PlumX to analyse the scholarly impact of monographs, and the first to adopt a multidimensional perspective for assessing the broad impact of books.

Materials and methods

This paper analyses and compares a set of 18 indicators for a set of monographs. As a sample, we take the monographs published by the University of Granada between 2010 and 2016, retrieved from the Andalusian Current Research Information System (CRIS). This allows us to work with a large data set in which all scientific fields are represented. In this section, we describe the data collection process, the characteristics of the data sources employed and the indicators used. First, we describe the publication data collection, giving an overview of coverage by field and how the data were collected. Next, we describe the data source employed to obtain the impact indicators: PlumX.

Publication data collection

At the end of September 2016, a data set of monographs published by the University of Granada between 2010 and 2016 was retrieved from the Andalusian CRIS (known by its Spanish acronym, SICA). All records registered as books and including an ISBN were identified and processed into a relational database. Figure 1 shows the annual distribution of monographs published during the period of study. A total of 2957 books were retrieved, of which 24% were published in 2010. Since then, we observe a declining trend in the number of books recorded. This is most likely because the information included in SICA is self-reported and not mandatory; hence the trend cannot be interpreted as a decline in the production of monographs.
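As an illustration of this filtering step, the following minimal Python sketch keeps only records registered as books that carry a plausible ISBN and loads them into a small relational database. All field and table names are hypothetical; they are not the actual SICA schema.

```python
import re
import sqlite3

# Hypothetical CRIS export; the field names below are illustrative,
# not the actual SICA schema.
records = [
    {"title": "Sample monograph", "doc_type": "book",
     "isbn": "978-0-306-40615-7", "year": 2012},
    {"title": "Sample article", "doc_type": "article", "isbn": "", "year": 2013},
]

# Loose shape check for ISBN-10/13 (with optional hyphens or spaces).
ISBN_RE = re.compile(r"^(97[89][-\s]?)?(\d[-\s]?){9}[\dXx]$")

def is_book_with_isbn(rec: dict) -> bool:
    """Keep only records registered as books that carry a plausible ISBN."""
    return rec["doc_type"] == "book" and bool(ISBN_RE.match(rec["isbn"].strip()))

books = [r for r in records if is_book_with_isbn(r)]

# Load the filtered records into a relational database for later matching.
con = sqlite3.connect("monographs.db")
con.execute("CREATE TABLE IF NOT EXISTS books "
            "(title TEXT, isbn TEXT PRIMARY KEY, year INTEGER)")
con.executemany("INSERT OR IGNORE INTO books VALUES (?, ?, ?)",
                [(b["title"], b["isbn"], b["year"]) for b in books])
con.commit()
```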

Fig. 1 Number of monographs published by researchers from the University of Granada according to SICA in the 2010–2016 period

Indicators used and PlumX as a data source

In this study, we analyse 18 indicators covering four different proxies of impact: citations, downloads, library holdings and social media mentions. There are currently three major tools collecting and aggregating altmetric data: ImpactStory, Altmetric.com, and PlumX. Whereas Altmetric.com and PlumX focus on gathering and providing data at a large scale, ImpactStory's target group is the individual researcher who wants to include altmetric information in her CV (Peters et al. 2016). PlumX is also the only one that provides other alternative metrics along with citation data and social media mentions. For this study, we used PlumX (the fee-based altmetrics dashboard). The data were gathered from PlumX on October 2, 2016, and checked regularly until the end of December 2016; no changes were observed in this period. Nevertheless, further research might attempt to clarify the stability and reproducibility of altmetric data, provide thorough and transparent information regarding their temporal evolution, and trace and understand potential score changes. PlumX uses ISBNs as book identifiers. Although no alternative solution has been proposed so far, this imposes some limitations that should be noted. As indicated by Zuccala and Cornacchia (2016), multiple ISBNs may be assigned to the same 'work' due to the publication of new editions or translations, or to reprinting and distribution by a different publisher.
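To give a concrete sense of this identifier problem, the sketch below applies the standard ISBN-10 to ISBN-13 conversion (this is not part of PlumX) so that the two representations of the same edition collapse to a single key; it does not, however, solve the deeper problem of distinct ISBNs assigned to new editions or translations of the same work.

```python
def isbn10_to_isbn13(isbn10: str) -> str:
    """Normalise an ISBN-10 to its ISBN-13 form (978 prefix), so that the
    two representations of the same edition collapse to a single key."""
    digits = isbn10.replace("-", "").replace(" ", "")
    core = "978" + digits[:9]  # drop the ISBN-10 check digit
    # ISBN-13 check digit: weights alternate 1, 3 over the first 12 digits.
    check = (10 - sum(int(d) * (1 if i % 2 == 0 else 3)
                      for i, d in enumerate(core)) % 10) % 10
    return core + str(check)

# Both identifiers below refer to the same edition of the same work.
assert isbn10_to_isbn13("0-306-40615-2") == "9780306406157"
```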

Andrea Michalek and Mike Buschman founded Plum Analytics in early 2012. In 2014, it became a subsidiary of EBSCO Information Services.

PlumX categorizes metrics into five separate types: usage, captures, mentions, social media, and citations. Table 1 lists the 18 indicators included in this study, categorized as in PlumX, together with their source of origin and the type of data source. In the following, we briefly discuss each of these categories:

Table 1 Data gathered in PlumX for this study according to their source of origin

Usage

This category includes abstract views, downloads, link-outs, library holdings and video plays from different data sources such as DSpace, EBSCO and WorldCat. Two types of indicators are considered here: those related to electronic usage (downloads, views, etc.) and those related to usage in print format. The former have traditionally been referred to as usage bibliometrics, a concept coined by Kurtz and Bollen (2010) and derived from the need to quantitatively assess the use of libraries' electronic collections. The analysis of library holdings as a potential indicator for research assessment was originally envisioned by Torres-Salinas and Moed (2009) and White et al. (2009); in that case, library holdings were specifically proposed to measure the dissemination of monographs.

Mentions

This category includes blog posts, comments, reviews and links from different tools such as Wikipedia, Goodreads or Amazon. What we find here are mainly altmetric indicators focused on mentions from social media platforms. Altmetrics, a term coined by Jason Priem in a tweet (Priem 2010), are defined rather ambiguously as any type of mention of scientific literature on any type of social media platform. While the value of altmetrics in research evaluation is still largely questioned (Sugimoto et al. 2017; Wilsdon et al. 2015), it has been suggested that they could be a plausible means of measuring broader forms of impact (Bornmann 2014). Most of the indicators in this category are social reviews, an indicator suggested as potentially relevant for assessing the impact of books (Kousha and Thelwall 2016).

Captures

This category also includes altmetric indicators: bookmarks, code forks, favorites, readers and watchers from different tools such as Mendeley, Goodreads or EBSCO. These indicators are usually related to readership metrics (Haustein 2014). Goodreads, in particular, has also been explored with regard to its potential to assess the impact of books (Kousha et al. 2016; Zuccala et al. 2015).

Social media

This category includes +1s, likes, shares and tweets from tools such as Twitter or Facebook. These are altmetric indicators widely explored in the literature (especially with regard to Twitter), for which no relation with citations has been found. While they cover a large portion of the journal literature, their relevance is still under question (Thelwall et al. 2013).

Citations

In this case, citations are retrieved from CrossRef, Scopus, and patent and clinical citation data sources.

This categorization may be subject to criticism, but one of its advantages is that the results are differentiated according to the indicator and its origin, and can be aggregated according to the user's criteria.
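As a sketch of how such user-defined aggregation might look, the snippet below maps a few of the indicators of Table 1 to the five PlumX categories and collapses per-indicator counts into per-category totals. The indicator keys are hypothetical labels, not actual PlumX field names.

```python
from collections import defaultdict

# Illustrative mapping of a few indicators to PlumX's five categories;
# the keys are hypothetical labels, not actual PlumX field names.
CATEGORY = {
    "abstract_views": "usage", "downloads": "usage", "holdings_worldcat": "usage",
    "readers_mendeley": "captures", "readers_goodreads": "captures",
    "reviews_amazon": "mentions", "links_wikipedia": "mentions",
    "tweets": "social media", "likes_facebook": "social media",
    "citations_scopus": "citations", "citations_crossref": "citations",
}

def aggregate_by_category(book_metrics: dict) -> dict:
    """Collapse per-indicator counts into per-category totals for one book."""
    totals = defaultdict(int)
    for indicator, count in book_metrics.items():
        totals[CATEGORY.get(indicator, "other")] += count
    return dict(totals)

# Example: a book with many holdings, a few Goodreads readers, no tweets.
print(aggregate_by_category(
    {"holdings_worldcat": 120, "readers_goodreads": 7, "tweets": 0}))
# -> {'usage': 120, 'captures': 7, 'social media': 0}
```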

Table 2 shows an example of the information retrieved from PlumX for two books included in our dataset.

Table 2 Example of indicators obtained from PlumX for two monographs published by researchers from the University of Granada

Results

This section is structured in two parts, each related to one of the specific objectives of the paper. The first subsection analyses the coverage and distribution of the 18 indicators for the total book production of the University of Granada during the 2010–2016 period. The second subsection compares the coverage by field of citation indicators based on journal output with that of PlumX indicators based on book output.

Coverage and distribution of 18 impact indicators

2299 books were identified in PlumX, representing 78% of our original dataset. However, 1382 of these books (60%) had no impact metrics attached to them. Figure 2 shows, for the books that did have metrics, the distribution of indicators by metric category. As observed, 79% of the metrics are related to usage, followed by 20% related to captures. Notably, mentions, citations and social media together represent only 1% of the metrics identified. Within the usage category, the predominant indicator is library holdings, obtained from the WorldCat catalogue (48%), followed by abstract views (22%). The low coverage of all indicators is reflected in Table 3, where the indicator with the highest coverage (library holdings) includes only 31% of the total sample, followed at a distance by Mendeley readership (19%).
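The coverage figures above can be reproduced with a few lines of arithmetic; the sketch below assumes a hypothetical list of per-book metric dictionaries, one per book identified in PlumX.

```python
def coverage_report(sample_size: int, plumx_books: list) -> None:
    """Share of the sample found in PlumX and share of those without metrics."""
    found = len(plumx_books)
    empty = sum(1 for book in plumx_books if not any(book.values()))
    print(f"Identified in PlumX: {found} ({found / sample_size:.0%} of sample)")
    print(f"Without any metric:  {empty} ({empty / found:.0%} of those identified)")

# With the figures of this study: 2957 books sampled, 2299 found in PlumX,
# of which 1382 carried no metric at all.
coverage_report(2957, [{"holdings": 1}] * 917 + [{"holdings": 0}] * 1382)
# -> Identified in PlumX: 2299 (78% of sample)
# -> Without any metric:  1382 (60% of those identified)
```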

Fig. 2 Distribution of indicators retrieved according to their category

Table 3 Coverage and statistical indicators for metrics extracted from PlumX for the 2010–2016 period. Numbers in italics indicate traditional citation indicators

In all, we observe that five of the indicators covered practically none of the records in our sample (abstract views from DSpace, data views, downloads, tweets and social media), and seven covered less than 10% of the records (sample downloads, PDF views, HTML views, exports-saves, readers from Goodreads, reviews from Amazon and Goodreads, and links). The category with the lowest coverage is social media. Contrary to what is observed with journal data (Robinson-Garcia et al. 2014), the coverage of Twitter is extremely low: only four books were mentioned on Twitter, and practically 100% of the records in our sample had no mentions on either Twitter or Facebook.

When focusing on the number of hits per book for each indicator, we again observe low average figures. Indeed, for 13 of the indicators used, the average number of hits is below one. However, we note considerable differences among the remaining indicators. Library holdings show the highest average number of hits per book (19.8), followed by abstract views in EBSCO and readers in Goodreads (6.9). The relation between the former and the latter has been suggested elsewhere (Zuccala et al. 2015) as an explanation for the high average number of readers despite the indicator's low coverage. In most cases, we also observe high standard deviations, reflecting the skewed distribution of these indicators, which follows the pattern of citation distributions. This is apparent from the maximum number of hits reached by each indicator: the largest number of hits for a single book is found for readers in Goodreads, with almost 10,000 hits, followed by abstract views in EBSCO (2084) and library holdings (1271).
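A minimal sketch of the descriptive statistics behind Table 3, computed over toy per-book counts for a single indicator; the numbers are invented solely to show the typical skew (a few heavily used books, many with nothing).

```python
import statistics

def describe(counts: list) -> dict:
    """Coverage, mean, standard deviation and maximum of hits per book."""
    return {
        "coverage": sum(1 for c in counts if c > 0) / len(counts),
        "mean": statistics.mean(counts),
        "stdev": statistics.pstdev(counts),
        "max": max(counts),
    }

# Toy distribution: most books get nothing, a handful dominate the counts.
holdings = [0] * 60 + [1, 2, 3, 5, 8, 40, 250]
print(describe(holdings))
# A standard deviation far exceeding the mean, together with a large maximum,
# reflects the skew reported for most PlumX indicators in this study.
```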

The skewness of the distributions is confirmed by Fig. 3, which shows, for each category of indicators, the distribution of hits by number of books. While all categories show skewed distributions, the least skewed is the usage category.

Fig. 3 Distribution of hits by number of books according to the categories of indicators defined by PlumX for monographs published by researchers from the University of Granada in the 2010–2016 period

Comparing a citation analysis of journal output with a multidimensional analysis of book output

At this stage, a key question is the extent to which the indicators offered by PlumX are useful. To address it, Fig. 4 compares the output of the University of Granada and its impact depending on the document type and the impact indicators employed. Figure 4a and c show the university's output based on the number of published books and journal articles, respectively. As observed, Humanities & Arts (1175) and Social Sciences & Law (648) are the fields with the largest number of published books. Conversely, when focusing on journal articles, the largest fields are Natural & Exact Sciences (7420) and Engineering & Technology (2406). We observe a similar pattern in Fig. 4b and d. Indeed, Fig. 4c and d show the classical distribution of publications and citations based on Web of Science journals.

Fig. 4 Comparison of the approaches taken to analyse the scientific output of the University of Granada, books versus journal articles, in the 2010–2016 period. a # books by field. b # metrics collected by PlumX. c # Web of Science papers by field. d # citations by field

Clearly, Humanities & Arts and Social Sciences & Law are the fields most negatively affected by bibliometric analyses: they have "little" output and "little" impact. It should be stressed that "little" is not used as a pejorative term, but follows the long tradition based on the famous book "Little Science, Big Science" by Derek de Solla Price. Scientific publishing is a very complex activity that differs across publication communities; therefore, it cannot be assessed by simply counting different publication outputs, such as books and journal articles, together. It should also be considered that the time involved in writing and publishing a book is much longer than for a journal article. Concerning impact, the term "little" is even more disputable because, as this study corroborates, there is a very strong dependence on the metric used or available.

However, when focusing on alternative metrics such as those provided by PlumX, a completely different picture emerges. Figure 4a and b show the opposite view, in which these fields are the best represented. Combining both approaches provides a more accurate picture of scientific output and impact, by introducing a neglected output (books) and more appropriate metrics to analyse its impact (e.g. library holdings). This avoids current mismatches in bibliometric analyses by broadening the scope of outputs and opening up the types of indicators used (Rafols et al. 2012). Still, it should be noted that indicator availability is limited to a small percentage of the output sample and, therefore, this complementarity is not always achieved.

Discussion and concluding remarks

This study analyses the coverage and distribution of 18 scholarly impact indicators retrieved from PlumX for books published by the University of Granada during the 2010–2016 period. These indicators are grouped into five categories, each aimed at showing a different dimension of impact: usage, mentions, captures, social media and citations. The aim of the paper is twofold: first, to analyse the coverage of PlumX indicators for monographs; second, to compare traditional citation analyses based on journal articles with the multidimensional perspective offered by PlumX.

Sixty percent of the books included in our sample showed no values for any of the 18 indicators analysed. While this coverage may seem low, it is actually higher than that reported for citations: Torres-Salinas et al. (2014a) reported an uncitedness rate of 80.5% for the Book Citation Index. Usage indicators, and specifically library holdings, were found to be the most comprehensive indicators for monographs; 79% of all the metrics retrieved belonged to this category. By contrast, mentions, social media and citations were almost entirely lacking (see Table 3). In this sense, it is worth noting the low figures found for tweets, given that Twitter has been found to be the most widely used data source for altmetrics (Thelwall et al. 2013). This could be related to the current crisis in the book publishing industry, where digital publishing has not expanded as much as e-journals have (Williams et al. 2009) and Open Access remains a challenge (Eve 2014). The lack of mentions in social media could thus be due to the impossibility of accessing books electronically.

Regarding the second goal, the comparison between approaches based on books and a variety of alternative indicators and those based on journal articles and citation-based indicators shows, once again, the importance of monographs in the social sciences and humanities, but also the limitations of citation indicators in fully capturing their impact. As pointed out in previous literature (e.g. Torres-Salinas and Moed 2009; White et al. 2009), library holdings seem to be the most promising proxy of scholarly impact for books. Recent studies based on the Book Citation Index show that citations are too scarce to be considered an appropriate impact measure for books (e.g. Torres-Salinas et al. 2014a, b). However, further research is still needed to overcome the many technical issues involved in matching monograph metadata from different sources (Zuccala and Cornacchia 2016).

The results shown here explore the potential of the variety of indicators offered by PlumX. Still, they depend heavily on the features and capabilities of PlumX itself.

Many of these indicators need to be studied in more detail, especially concerning the correctness, validity and stability of the resulting data.