1 Historical Roots of Library Access

The advent of the internet has revolutionized the ways in which libraries serve their users by facilitating the expansion of collections and enhancing access to those collections. However, though the internet has had an unprecedented and profound impact on libraries, other innovations, also ground-breaking in their respective times, have set the stage for the development of the modern library by facilitating access to information. Among these developments are the invention of the Gutenberg printing press, the collocation of library materials by subject, and the use of assignment indexing.

2 Impact of the Gutenberg Printing Press

Since their inception 4500 years ago, libraries have strived to fulfill two functions that appear, on the surface, to be contradictory. On the one hand, they have sought to serve users by making information in its many forms as accessible as possible. On the other, they have needed, at times, to restrict access to information in order to preserve and protect it for future users and future generations. However, throughout history there has been an undeniable trend toward increasing user access to information as it has become easier to record or publish, and less expensive to acquire.

From ancient through medieval times, publishing was incredibly meticulous work. In most parts of the world, information was recorded painstakingly by hand using media such as stone, clay tablets, papyrus, and animal skin. As one might imagine, texts produced by such arduous methods, many of them existing in very limited quantity or even as one-of-a-kind specimens, were often treated as precious objects [1]. Accordingly, evidence points to the fact that some early libraries were draconian in their role as guardians of their collections. Some of the earliest and most severe library rules, in the form of entreaties to the gods to punish irresponsible borrowers or thieves, were inscribed on some of the clay tablets (see Fig. 12.1) kept in early Mesopotamian archive-libraries: “Whoever removes [the tablet]…may Ashur and Ninlil, angered and grim, cast him down, erase his name, his seed, in the land” [2]. “He who entrusts it to [other’s] hands, may all the gods who are found in Babylon curse him!” [1]. Books in some medieval libraries were chained to furniture to prevent theft, but persistent thieves occasionally cut them from their bindings [1].

Fig. 12.1

Clay tablet recounting the tale of a battle between two gods, found at the site of the Assyrian city of Nineveh [3]

Though there is little evidence to suggest that the borrowing of materials from ancient or medieval libraries was permitted frequently, an inscription found in an ancient Athenian library states that the “directors had decided to eliminate borrowing,” suggesting that it was allowed for a time [2]. Access to the collections of early libraries was typically limited to nobility, clergy, and scholars, though once again, history provides exceptions such as the Roman bath libraries of Nero’s era, which were open to all Romans, regardless of class, gender, or age [2].

Progressive printing techniques were introduced early in the Far East. Paper was in use as early as the Western Han dynasty (206 B.C.E.–9 C.E.), and multiple copies of texts were printed using hand-carved wooden blocks, the earliest example of which dates back to eighth-century Korea [4]. The Chinese had even experimented with movable type by the mid-11th century. However, movable type was hardly the game-changing breakthrough in China that it would be in Europe 400 years later, considering the expense and effort involved in producing stamps, or “types,” for each of the thousands of characters of the Chinese language that might constitute a literary work [4].

According to library studies scholar Leila Avrin, “no historian believes seriously that Chinese printing directly inspired the European invention” [4]. However, paper did spread outward from China, albeit slowly, reaching Korea and Japan to the east and, by way of the Islamic world, eventually Europe. According to an Arabic text dated 1482, paper was being made in the Islamic empire by the early eighth century [4]. It would take the next 600 years for papermaking as a technology to spread from Muslim Spain to Christians in Spain and then to much of the rest of Europe [4].

Papermaking reached Mainz, Germany, in the 1320s [4]. In that same city, in approximately 1450, Johannes Gutenberg introduced a wooden hand-press that employed metal movable type, a more feasible prospect in Europe than in Asia given the relatively limited number of letters in the alphabets of Romance and Germanic languages. The effects of the Gutenberg press and its successors on the availability of books were profound. In Europe before 1500, a book might be available in at most one hundred copies and read by thousands of people [5]. After 1500, however, thousands of copies of a book could be available and could be read by hundreds of thousands of people [5]. The growth of European libraries during this period was enormous compared to the holdings of libraries during the medieval period, partially as a result of the increased availability of books and the relative drop in their cost attributable to the Gutenberg press and successive versions of the device [5].

The holdings of college libraries in some cases expanded from under 1000 items to hundreds of thousands of items [1]. The availability of printed material, in turn, increased literacy rates and drove up the demand for books, which fueled the growth of the book trade [5]. Thus, the expansion of libraries was a direct result of the increase in both supply and demand [1].

Books, though rare by today’s standards, were no longer considered priceless. Consequently, libraries relaxed their role as guardians of information, expanding services to wider populations and allowing users greater access to materials. For example, Cardinals Richelieu and Mazarin, who served as chief French ministers, collected so many books that they hired a full-time librarian to organize the collection, which was open to “everybody” in 1661 and considered by many to be the “best library of the time” [1]. By the late 1600s, thirty-two Parisian libraries and three national ‘public’ libraries were accessible to general readers [1]. (However, French public libraries catered more to scholars than the public in terms of their collections until the early 1900s [1].) Around the same time in Britain, parish churches made small libraries available to the public [5].

The demand for a wide variety of reading material, including popular items such as novels, was high enough from the late 1700s to the mid-1800s that people who lacked access to libraries were willing to pay for it. During this time in America and parts of Europe, subscription or dues-based access to collections at “social libraries” and commercial book rental services known as “circulating libraries” gained popularity [1]. By the mid-1800s, however, public libraries had begun to expand and proliferate.

The first American free public library funded by taxes opened in Peterborough, New Hampshire, in 1833 [1]. In Britain, however, the history of the modern public library began in earnest between 1847 and 1850, when Parliament passed a series of acts that led to the establishment of tax-supported public libraries throughout the country. As a direct result, by 1900, 300 public libraries had been established [5]. Public libraries made significant strides in America, France, Germany, and Japan in the mid to late 1800s, some enabled through legislation and others through charitable organizations such as the Franklin Society, as was the case in France [1]. However, the cause of library access received its most significant boost in the form of $56 million in funding from steel baron Andrew Carnegie, a Scottish immigrant who had made his fortune in the United States. In English-speaking countries throughout the world during the late 19th and early 20th centuries, more than 2,500 libraries, many of them public (see Fig. 12.2), were established through Carnegie’s philanthropy [6]. The chain reaction started by Gutenberg’s invention had rippled far and wide; libraries, and print-based information, were finally available to the masses.

Fig. 12.2

Carnegie Public Library (now Carnegie History Center) in Bryan, TX. Photo by Flickr.com user Edwin S., used under a Creative Commons license

3 Collocation and Assignment Indexing

Another major breakthrough in terms of user access to library materials has come in the form of two organizational innovations that go hand-in-hand: collocation and assignment indexing. Collocation is the grouping together, whether in a catalog or a physical collection, of materials by type. Modern libraries using the Dewey Decimal or Library of Congress classification systems achieve collocation by assigning a call number to each item: a precise code denoting where the item is to be shelved. Typically coded within that call number is the item’s subject focus (astrophysics, for example) or genre (fiction, for instance). Beyond the first portion of the call number indicating the general subject focus or genre of a work, a further subdivision is often made by author’s surname, geographic focus, or some other narrower category (see Fig. 12.3).

Fig. 12.3

Anatomy of a Library of Congress call number

This arrangement maximizes the potential for serendipitous discovery while exploring a library collection or catalog, as a user setting out to retrieve a particular item may encounter a trove of items on their topic of interest located or listed nearby. Shelf collocation can be reproduced virtually in many online library catalogs through a call number search feature. As demonstrated in Fig. 12.4, by specifying a call number or range of call numbers, a group of records organized in call number order can be browsed virtually before going to the shelf (though some electronic resources will be listed only in the catalog since they cannot be shelved).

Fig. 12.4

Results of a call number range search targeting items on natural disasters, demonstrating the collocation of items by subject as they would be collocated on the shelf
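
To make the mechanics concrete, the following is a minimal sketch, in Python, of how shelf-order sorting and a call number range search might be implemented. The call numbers, titles, and parsing are invented and greatly simplified (real Library of Congress filing rules also account for cutter numbers and dates); the sketch illustrates the principle rather than any particular catalog’s code.

import re
def shelf_key(call_number):
    """Split a call number like 'QC903 .M38 2012' into parts that sort in shelf order."""
    m = re.match(r"([A-Z]+)\s*(\d+(?:\.\d+)?)", call_number)
    return (m.group(1), float(m.group(2)), call_number)
catalog = {
    "QC903 .M38 2012": "Climate change primer",
    "QC926.32 .A45 2009": "Hailstorms and hazards",
    "QC903 .A22 2015": "Global warming debates",
    "PS3558 .E37 1999": "A novel",
}
# Sorting by shelf key collocates the QC903 (climatology) items together.
for cn in sorted(catalog, key=shelf_key):
    print(cn, "-", catalog[cn])
def browse_range(start, end):
    """Virtually browse the shelf between two call numbers, as a catalog's range search does."""
    lo, hi = shelf_key(start)[:2], shelf_key(end)[:2]
    return [cn for cn in sorted(catalog, key=shelf_key) if lo <= shelf_key(cn)[:2] <= hi]
print(browse_range("QC900", "QC930"))  # climatology items only; the PS novel is excluded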

Collocation, though useful, poses a challenge for catalogers. Since an item cannot be in more than one place at one time, collocation requires a cataloger to decide on just one subject focus or genre for the purpose of locating an item with similar items. However, it is not always easy to predict how users might seek an item. For instance, in Hypothetical University Library, an animated film such as “The Lorax,” based on the book by Dr. Seuss, might be shelved with all other animated films under Library of Congress Classification system call number NC1766. Though this makes sense, a user might, quite logically, search specifically for films on the topic of conservation of natural resources. Though “The Lorax” addresses this theme, if the user were to browse the shelving area where films on conservation are located at Hypothetical University Library, they would clearly miss “The Lorax.”

This is where assignment indexing comes in handy. Assignment indexing is the practice of “tagging” bibliographic records (in a modern online library catalog, a bibliographic record is a web page describing an item and providing its shelf location or, if it is an electronic resource, a link to its virtual location) with subject headings from a standardized list of descriptors, such as the Library of Congress Subject Headings or the Sears List, in order to create multiple subject access points. According to the Online Dictionary for Library and Information Science, an access point is “a unit of information in a bibliographic record under which a person may search for and identify items…” [7]. Oftentimes, catalogers will assign multiple subject access points in the form of subject headings to a bibliographic record in order to accommodate a variety of approaches to searching for an item. For example, a bibliographic record for “The Lorax” may, in addition to “conservation of natural resources–juvenile films,” contain the subject heading “pollution–juvenile films” just in case users decide to search using the term “pollution” instead of “conservation of natural resources.”

Subject headings are an example of a controlled vocabulary. By agreeing to use a controlled vocabulary, or standardized list of terms, to “tag” items, catalogers enable searching across multiple databases or library catalogs simultaneously. Since many libraries that own “The Lorax,” for instance, are likely to use the pre-determined Library of Congress Subject Heading “conservation of natural resources” to index this and similar items, it is possible to target these items with a subject search across the holdings of multiple libraries. In addition to enhancing access to items by allowing users to search for them in multiple ways, subject headings in a modern online library catalog enable hypertext cross-indexing. While viewing the bibliographic record for an item, such as “The Lorax,” users may navigate to similar items within a catalog or database by clicking on the subject headings that are attached to that record. Clicking “pollution–juvenile films,” for instance, would produce a list of items sharing that descriptor, such as “Bill Nye the Science Guy Pollution Solutions.”
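
The following minimal Python sketch, using invented records, illustrates how multiple subject headings give one item several access points, and how clicking a heading amounts to a simple index lookup; it is not drawn from any actual catalog implementation.

from collections import defaultdict
records = [
    {"title": "The Lorax",
     "headings": ["Conservation of natural resources--Juvenile films",
                  "Pollution--Juvenile films"]},
    {"title": "Bill Nye the Science Guy: Pollution Solutions",
     "headings": ["Pollution--Juvenile films"]},
]
# Each controlled heading points to every record that carries it.
subject_index = defaultdict(list)
for record in records:
    for heading in record["headings"]:
        subject_index[heading].append(record["title"])
# Either heading retrieves "The Lorax": two access points, one item.
print(subject_index["Conservation of natural resources--Juvenile films"])
print(subject_index["Pollution--Juvenile films"])
# Clicking a heading in a catalog amounts to this same lookup, which is
# how hypertext cross-indexing surfaces similar items.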

Though they have been refined substantially in the last 150 years, it is worth noting that systems of collocation and assignment indexing date back to ancient libraries. For example, the collections of Assyrian king Assurbanipal (see Fig. 12.5), who ruled from 668 to 627 B.C.E., consisted of thousands of clay tablets (upon which were inscribed some of the dire threats against irresponsible borrowers mentioned earlier in this chapter) and were collocated by means of a relatively complex scheme. One room of his palace contained tablets relating to government and history [5]. Other portions of the collection divided up by subject included geography, laws and legal decisions, legends and mythology, and commercial records [5]. Within each room, a shelf list detailing the titles of works contained therein was affixed to the wall [5]. In addition, tablets that were analogous to a subject catalog or descriptive bibliography were found in the rooms. Each of these special tablets offered descriptive details about the other tablets contained in that room, including the title of each work, the number of tablets for that work, the first few words, the number of lines, and symbols indicating location or classification [5].

Fig. 12.5

Stele featuring sculpture of Assurbanipal [3]

The Library of Alexandria, which was founded in approximately 300 B.C.E., serves as another early example of advanced library organizational systems. Callimachus of Cyrene, a scholar at Alexandria, can be considered one of the early pioneers of assignment indexing. Among his many contributions, Callimachus enhanced access to the alphabetically-ordered collection of the library, which comprised hundreds of thousands of works on papyrus rolls. He did so by compiling shelf-lists and bibliographical works including Tables of Persons Eminent in Every Branch of Learning Together with a List of Their Writings, a survey of all Greek writings so extensive that it filled five times as many volumes as Homer’s Iliad [2]. Callimachus broke the authors featured in this work into broad genre categories and then made finer distinctions from there, grouping them by their literary specialty: dramatic poets, epic poets, philosophers, comedy writers, historians, etc. [2].

4 Floundering in a Sea of Information: The Web and Information Literacy

In libraries during the early to mid-1990s, the use of print indexes declined sharply as CD-ROM and web-based databases greatly expanded access to metadata and digital content such as full-text versions of periodical articles. This marked the beginning of a period of widespread outsourcing of digital collections and a relinquishment of the meticulous control over the selection process that librarians had exercised over physical collections. Prior to this point, though a relatively small number of online research databases had been available before the advent of the World Wide Web, the bulk of a library’s holdings had been limited to what could be stored within the walls of library buildings. Many, if not most, of the items in those buildings had been vetted carefully by librarians with regard to accuracy, authoritativeness, or other quality-oriented collection development criteria. The inclusion of massive subscription-driven databases, each potentially containing tens of thousands of records along with articles from hundreds of periodicals, has made it infeasible for librarians to continue to apply rigorous selection standards to each and every item in a collection. Furthermore, after those databases are acquired, they continue to morph as content is added or subtracted by the database provider.

In terms of content newly available to users, library databases are just the tip of the iceberg. By 1994, with access to user-friendly, web-based search engines and web indexes such as Yahoo!, Lycos, and Infoseek, researchers and casual users alike had expanded their reach beyond the walls of physical libraries via the World Wide Web. Presented with information in new formats that had not been pre-selected by librarians or vetted through established publishers, many struggled to distinguish between reliable and unreliable content and lacked the savvy to formulate search strategies that would help them manage the overwhelming number of search results they were presented with. Stoker and Cooke summed up the problem in 1994:

Information posted on to the network does not go through the same rigorous review procedures as information which has passed through formal publishing channels. The facility has been described as ‘clogged with too much junk to make its use effective’ and the information ‘ephemeral and of questionable quality…’ On occasions it might be difficult to determine the originating institution or individual for an item [8].

In a 1998 survey, the Pew Research Center determined that 41 % of adults were using the internet, up from 23 % in 1996 [9]. Despite the potential pitfalls of using the web noted by Stoker and Cooke four years earlier, in 1998, 49 % of web users believed “that Internet news is more accurate than news found in traditional print and broadcast outlets” [10]. Around that same time, some researchers discovered that this user confidence in the web may have been unwarranted: an analysis of 41 web pages offering health advice concluded that “only a few web sites provided complete and accurate information” which indicated “an urgent need to check public oriented healthcare information on the internet for accuracy, completeness, and consistency” [11].

The results of another study in 2000 indicated that consumers of web-based information either lacked the skills to evaluate the reliability of websites or were relatively unconcerned about their origin or trustworthiness. In the study, nearly 1000 respondents were asked to rate how often they applied basic criteria for evaluating the validity of websites such as “check to see who the author of the website is,” “consider whether the views presented are opinions or facts,” and “consider the author’s goals/objectives…” [12]. Mean response scores for all but one of the nine criteria fell between the values used to indicate frequencies of “rarely” and “never” with regard to applying each of the criteria [12].

The problem persists. By 2010, 79 % of American adults had become internet users [13], and in 2012, the Pew Research Center published the results of another survey indicating that many of them may be generally uncritical of websites appearing in search engine results. The survey concluded that “roughly two-thirds of searchers (66 %) say search engines are a fair and unbiased source of information.” 28 % of respondents indicated that “all or almost all” of the information they get in their search engine results is “fair and trustworthy,” and an additional 45 % indicated that “most” is “fair and trustworthy” [14]. However, despite this high degree of confidence in search engines, “four in ten searchers” said they have “gotten conflicting or contradictory search results and could not figure out what information was correct,” and about four in ten also said they have “gotten so much information in a set of search results that they felt overwhelmed” [14].

Assisting clients with internet use has been a major component of many librarians’ job duties for nearly two decades. As a result, they have been first-hand witnesses to users’ struggles with the relatively new responsibility of evaluating documents and sites they encounter on the web. Critical thinking about the origin of sources, about the publishing process, and about the appropriateness of a source in terms of meeting an information need has always been a part of doing research, regardless of whether information is located on the web or in print. However, the challenge of determining the reliability of web-based information requires a new set of critical thinking skills to be applied in new contexts. As the Association of College and Research Libraries states, the “sheer abundance of information will not in itself create a more informed citizenry without a complementary cluster of abilities necessary to use information effectively” [15].

In order to address the need for these skills, many librarian positions now emphasize teaching as a major component of the job. Before the advent of the World Wide Web, librarians typically provided “orientations” or bibliographic instruction geared towards using card catalogs or online public access catalogs and navigating the collections which they had carefully vetted for reliability. Over the last 15 years, however, librarians have shifted their efforts towards providing instruction oriented around deeper critical thinking skills, often referred to as information literacy. Information literacy, as a skill set, is highly applicable to online environments as it empowers users to “recognize when information is needed and have the ability to locate, evaluate, and use effectively the needed information” [15].

Analyses of librarian job advertisements have reflected this shift towards a greater instructional role. For instance, in 2002, 54.6 % of librarian job descriptions examined on an international job posting website over a 3-month period indicated that “user education or training is an important part of the job” [16]. In 2013, another study was published that gathered data from supervisors at organizations that had posted librarian job announcements on the American Library Association’s job website. The study concluded that for 65 % of the jobs, instruction skills “were a required qualification.” For an additional 34 % of the jobs, instruction skills “were a preferred qualification.” Only one response did not list instruction as belonging to either category [17].

Librarians teach in a variety of contexts, including credit-bearing university courses, public library workshops, and in online environments. Some also consider one-on-one interactions with users at the reference desk or elsewhere to be an extension of that teaching role. Regardless of the context, by re-envisioning their profession and adapting to their clients’ needs, librarians are empowering users to become critical consumers of information.

5 Library 2.0 and the Rise of Next-Generation Library Search Interfaces

The Web 2.0 movement that began in the early 2000s has been characterized, in part, by a shift from static web pages to interactive pages, platforms, and applications that enable users to contribute and collaborate in a variety of ways. This user-oriented approach to design has also extended to providing a simple, intuitive, and streamlined online experience. Early social media sites such as Friendster, photo sharing site Flickr, and social bookmarking sites such as Del.icio.us were pioneers of the phenomenon that has drastically changed the way we communicate and engage with information. Web 2.0 also put publishing in the hands of the masses; without knowledge of web programming languages, many users were able, for the first time, to shape the content of the web by using wikis, blogs, and simple web-page creation applications such as Google Sites. Web sites began to invite users to comment on and rate content, or even to enhance access to that content via tagging, a crowd-sourced form of assigning subject descriptors that is also known as folksonomy.

Since the advent of Web 2.0, libraries have followed suit by enhancing the interactive capabilities of their websites and contracting with vendors who specialize in incorporating dynamic and interactive capabilities into library catalogs. As a result, navigating the online presences of most libraries has become a more participatory experience for users. Access to library resources and services via online public access catalogs (OPACs) has improved drastically over the traditional catalogs in use prior to what is often referred to as Library 2.0. Traditional, or “legacy,” catalogs, according to renowned library technology consultant Marshall Breeding in 2007, were overly complex, lacked engaging features, and were “unable to deliver online content” [18].

The ideals of Library 2.0 were epitomized by information architect Casey Bisson’s development of a library OPAC overlay interface (which works in concert with an existing OPAC, rather than replacing it) based on the popular open-source WordPress blogging software. The project, called Scriblio, was born out of Bisson’s conviction “that libraries must use, expose, and make their data available in new ways” [19]. The use of the WordPress platform brought library catalog records up from the deep web where they had long been buried, making them discoverable via search engines and therefore indexable by users of social bookmarking services. Scriblio, originally called “WordPress OPAC,” which was announced on Bisson’s blog in early 2006 [20], offered several capabilities beyond those available from traditional OPACs in use at the time. Among those improvements were faceted searching (options for limiting or refining one’s search after the initial query has been submitted) and browsing via tag clouds. Within catalog records displayed in the Scriblio interface, similar items were suggested and accessible via hyperlink. Users were also able to comment on catalog items, and by subscribing via RSS, they could receive automatic updates detailing changes to the catalog [21].
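
As a rough illustration of how faceted searching works behind the scenes, the following Python sketch (with invented records and field names) counts the values of a facet across a result set and then filters on a chosen value. Actual next-generation catalogs compute facets within their search engines, but the principle is the same.

from collections import Counter
results = [
    {"title": "Storm surge", "format": "Book", "year": 2006},
    {"title": "Tsunami atlas", "format": "Book", "year": 2011},
    {"title": "Quake: a documentary", "format": "Video", "year": 2011},
]
def facet_counts(items, field):
    """Tally the values of one field across the current result set."""
    return Counter(item[field] for item in items)
print(facet_counts(results, "format"))  # Counter({'Book': 2, 'Video': 1})
print(facet_counts(results, "year"))    # Counter({2011: 2, 2006: 1})
# Choosing a facet value simply filters the result set.
refined = [r for r in results if r["format"] == "Book"]
print([r["title"] for r in refined])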

Open-source integrated library systems, or ILSs (library management software which includes the OPAC), such as Evergreen offered similar enhancements to the traditional OPAC. Despite the improvements they brought and the fact that they were free, open-source catalog overlay interfaces such as Scriblio and open-source OPACs have been adopted by relatively few institutions. This may be because some libraries are daunted by the prospect of limited technical support for such products (support being largely community-based, rather than provided by a vendor) and by their reputation among some for having the buggy aspects of a beta-quality platform [22]. Furthermore, few libraries have the type of in-house programming expertise possessed by Lamson Library at Plymouth State University, which employed Casey Bisson. Another possible reason is that by 2007, commercial ILS vendors such as Polaris Library Systems, OCLC, and Innovative Interfaces, Inc. had taken notice of these “next-generation OPACs” and of enhancements developed by Library 2.0 pioneers like Bisson, and had scrambled to improve their own OPACs [23] by adding dynamic features or by offering new products altogether.

Additional Web 2.0 functionality for existing OPACs was offered by a variety of third-party developers, such as Library Thing for Libraries (see Fig. 12.6), a commercial service which incorporates some navigation features similar to those of Scriblio and has evolved to offer users the ability to rate, review, and tag items displayed in library catalogs. By incorporating third-party enhancements such as Library Thing for Libraries and overlaying user-contributed content over the elements of the traditional library catalog, the next-generation OPAC has become the “mash-up” of the library world. Since the advent of the next-generation catalog, this theme of integration in library search interfaces has moved much further towards realizing Marshall Breeding’s position that, in “an ideal world, the content of all the library’s collections would be available through a single search interface” [18].

Fig. 12.6

Search results processed by a next-generation catalog that incorporates Library Thing for Libraries. Note the faceted search options on the left for narrowing the list of results by a variety of criteria, including user ratings

6 Integrating the Search Process

With the growth of the internet, electronic scholarly journal publishing has also exploded in prevalence. Libraries increasingly license e-journal content (in packages from publishers, as single titles, or, most commonly, via database aggregators). E-journal packages and databases allow libraries to dramatically increase the depth and breadth of content available to their patrons—usually at a fraction of the cost of subscribing to or purchasing titles individually. As library patrons experience improved access, they also come to expect that access to be to the digital form of an article—not a physical copy they must locate on a library shelf. But as access has expanded, so has the need to enable even greater access: from the references cited in an article of interest, or from citation and abstract records in a particular database, to the full text of the corresponding articles.

7 Search Process: OpenURL Resolvers

In the late 1990s, OpenURL resolvers (also referred to as link resolvers) entered the scene to address these desired research enhancements. While at first not much more than static links to articles on a publisher’s web site, link resolvers soon developed a standardized syntax that allowed metadata (the journal’s ISSN and title, the article title, author, volume, date, page numbers, etc.) to be passed from a link in one database or platform to a “knowledge base” provided by an OpenURL vendor to which the library subscribes, and from there to the full-text content, which could reside anywhere within the library’s e-holdings. The first commercially available link resolver, SFX, was released in 2001 by Ex Libris [24]. Through a subscription to this product, libraries could provide information to the vendor about the e-journals, databases, and e-serials packages to which they had access. Ex Libris would then coordinate with the vendors to maintain updated title and date coverage lists within a knowledge base to ensure links reached their appropriate targets [25]. Soon other providers began offering these services. Some examples include EBSCO’s LinkSource, Serials Solutions’ (later acquired by ProQuest) 360Link, and OCLC’s WorldCat Knowledge Base. Figure 12.7 provides an example of an interstitial OpenURL results screen with both article-level and journal-level links available.

Fig. 12.7

Example of an OpenURL results screen with links to content
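
For readers curious about the syntax itself, the following Python sketch assembles an article-level OpenURL query string in the key/encoded-value (KEV) format defined by the OpenURL 1.0 standard (ANSI/NISO Z39.88-2004). The resolver base URL and the sample article metadata are invented; each library would substitute its own resolver endpoint.

from urllib.parse import urlencode
RESOLVER_BASE = "https://resolver.example.edu/openurl"  # hypothetical endpoint
def build_openurl(issn, jtitle, atitle, aulast, volume, issue, spage, date):
    """Assemble article metadata into an OpenURL for the resolver to match against its knowledge base."""
    params = {
        "url_ver": "Z39.88-2004",                       # OpenURL 1.0
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",  # journal metadata format
        "rft.genre": "article",
        "rft.issn": issn,
        "rft.jtitle": jtitle,
        "rft.atitle": atitle,
        "rft.aulast": aulast,
        "rft.volume": volume,
        "rft.issue": issue,
        "rft.spage": spage,  # starting page
        "rft.date": date,
    }
    return RESOLVER_BASE + "?" + urlencode(params)
print(build_openurl("0000-0000", "Journal of Examples", "A sample article",
                    "Smith", "12", "3", "45", "2014"))

A user clicking such a link lands on the resolver’s interstitial screen (as in Fig. 12.7), which lists whichever article-level and journal-level targets the knowledge base was able to match.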

By the time Google Scholar launched in November 2004, libraries were able to work with their OpenURL provider to send their holdings information to Google. This resulted in libraries being able to connect with more potential users who may have been starting their research with Google instead of library resources. With IP authentication, all users on an academic campus searching Google Scholar would automatically see information about connecting to results via their library’s resources right from the search results list. Libraries could configure the text of the link as well. While this process has hardly been foolproof, as it relies on webcrawler-indexed metadata on Google’s side matching up with metadata supplied by content providers, it has provided a way for libraries to link their holdings, using OpenURL technology, with what their patrons were locating on the open web (and may otherwise have been prompted to pay for on their own). Figure 12.8 demonstrates the search results screen a user might see in Google Scholar if their library has sent its holdings information to Google.

Fig. 12.8

Example of a Google Scholar results screen where a library has sent its e-serials holdings to Google

OpenURL resolvers have not been without issue, however. They can be expensive—beyond the reach of a small library’s budget—further exacerbating digital divide issues, where library users in smaller communities with less well-funded libraries lack access to technology that aids in their discovery of and connection to information. Also, since the success of a link can depend upon a complete matchup in metadata between the provider hosting the content and the provider indexing the content (frequently two different vendors), false negatives and false positives often result. That is, the OpenURL resolver may report that the library does not have access to an article that it actually does have access to. Or, conversely, the resolver may link to a database where it states the article should be found, but the library does not have access to that article via any of its subscriptions. Understandably, this can be confusing to users.

In a 2010 study, two librarians found the mean total success rate for SFX (across links to books, newspapers, dissertations, and journal articles) was only 71 % [26]. This causes great frustration for librarians, who will often be referred from the technical support desk of the indexing vendor to that of the content provider to that of their OpenURL/knowledge base provider. Full resolution may take days, weeks, or months, or may not come at all, and librarians new to e-resource management may be confused about where to begin. Some of this can also be the fault of the knowledge base vendor, who may have neglected to add, delete, or modify coverage dates of a title residing within a particular database.

The need to maintain an updated knowledge base cannot be overstated. OpenURL vendors must continuously update their information, and libraries must also remain vigilant whenever they add or subtract from their e-collections or when a collection changes platform or title. Failure to do so results in broken links for patrons. Additionally, some publishers, as a rule, do not allow links directly to the article level. They may stop at the issue level or even the journal title level in an attempt to encourage libraries or end users to pay for subscriptions (electronic or print) directly to their journal titles. Often, librarians will not be aware of which vendors have these practices until they or their patrons encounter problematic links. Price and Trainor (2010) encourage libraries to thoroughly review content providers in order to know which do not allow article-level links [27].

8 Search Process: Federated Searching

While OpenURL resolvers do allow for communication between databases, library patrons increasingly expect that their searches will return all results held by the library—not just some. Federated search (or metasearching) arrived in the marketplace in the late 1990s and early 2000s, offering what librarians hoped would be a Google-like experience for end users [28]. Federated search claimed to make a “one-stop shop” for searching all library resources a possibility. Users do not want to become experts in the various interfaces employed by library databases, and federated searches appeal to the novice user and the experienced user alike [29].

However, federated searching and Google work in entirely different ways. Through automated web crawling, Google is able to pre-index website content, returning results very quickly when users search. There are limitations, however. Google cannot search the deep web—content within subscription databases, data sets stored as files on government websites, orphan resources that are not linked to from anywhere else, dynamic content generated on the fly, and other resources to which libraries and librarians can provide access [30]. Federated searching, on the other hand, sends out queries to multiple databases (often including the library’s online public access catalog, or OPAC) which are maintained by different vendors on different platforms with different indexing and different types of search protocols (XML, which uses a type of tagging of search elements, vs. Z39.50, a library-specific search protocol developed before the web, vs. the federated search vendor cobbling together a search strategy to access diverse resources) [29].

So, while a library’s federated search product will return items on the deep web (indexed by library-subscribed resources) that are unavailable or largely invisible to Google, it will also do so at a much slower speed. Users may get a Google-like single search box, but the results will not populate their screen instantaneously as with Google. Instead, the federated search calls out to the databases separately and returns results separately, as they are retrieved from the native databases. This leads to a list of non-ranked, non-de-duplicated results. Librarians may understand that these results need to be combed through carefully, but end users are used to the most relevant results showing up at the top of the screen. That might not always happen with federated searching. Another limitation is the fact that most federated searches are most effective when searching no more than a dozen resources [31]. When most large academic libraries subscribe to 60, 90, or over 100 electronic resources, patrons certainly are not getting a true “one-stop shop.”
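
The timing problem is easier to see in code. The following Python sketch simulates the federated pattern with three invented sources of differing latency: the query fans out concurrently, results are appended in arrival order without ranking or de-duplication, and the total elapsed time tracks the slowest source. It is a toy model, not any vendor’s implementation.

import time
from concurrent.futures import ThreadPoolExecutor, as_completed
SOURCES = [("OPAC", 0.5), ("Database A", 1.0), ("Database B", 3.0)]  # (name, simulated latency in seconds)
def query_source(name, latency, query):
    time.sleep(latency)  # stand-in for a slow remote connection
    return [f"{name}: result {i} for '{query}'" for i in (1, 2)]
def federated_search(query):
    results, start = [], time.time()
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(query_source, name, lat, query) for name, lat in SOURCES]
        for future in as_completed(futures):  # fastest source lands on top, regardless of relevance
            results.extend(future.result())
    print(f"elapsed: {time.time() - start:.1f}s")  # roughly equals the slowest source (3.0s)
    return results
for r in federated_search("natural disasters"):
    print(r)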

How do users feel federated search compares to the Google experience? A 2013 study by Helen Georgas offers a comparison. Undergraduate students at Brooklyn College were asked to find one book, two articles (one scholarly), and one additional source of their choice using two search tools—a federated search and a Google search. The federated search tool was configured to search 11 databases, including the library’s OPAC. 81 % of students said Google was easier to use, with one commenting, “because it was faster.” When asked which search they liked better, the students were evenly split between Google and the federated search. 59 % of the students said they would use the federated search tool on future assignments, and 56 % would recommend it to fellow students. Among their complaints about the federated search, students felt it was difficult to find books and too slow overall. Their complaints about Google related to finding too many irrelevant results and being asked to pay for content. Some students mentioned they wished the library had a federated search when, in fact, the library had subscribed to the service for years. Students also remarked that they had difficulty identifying the types of sources retrieved in the federated search. Whereas Google and Google Scholar identify results by type of material, the federated search simply tags results with the database from which they were retrieved (although librarians felt the type of information was fairly obvious, given that the OPAC results were all physical items and mostly books). These findings point to the need to make sure library patrons are better educated about services and features [32].

Despite the fact that they enable discovery of quality sources and students find them useful, federated searches have serious limitations. First, the speed of the service is dictated by the slowest-performing of the remote database connections. Similarly, the fastest-performing remote connection will always have its results listed on top—leading to a potential problem of falsely-perceived relevance. Large result sets (as would typically result from a broad search by a novice user—the very type of user and search for which federated search was developed!) cause problems. Due to the time involved in retrieving these large remote result sets, results are typically truncated by the federated search service, and any de-duplication or relevance ranking within result sets is then performed on only a small subset [31].

From the librarian’s perspective, implementing a federated search product can be frustrating, often taking months to launch. And what has been billed as the one-stop search is most often far from it. In addition to the fact that federated search works better when no more than a dozen resources are selected, there is the issue of some vendors refusing to participate in federated search development—rendering their content invisible to federated search users. And because federated search requires some translation across database collections, if a vendor is slow to develop or fix that translator, resources on that platform may be excluded as well [33].

Furthermore, with so much reliance on one product (the federated search) needing to utilize the different types of indexing employed by the disparate content vendors, it is very difficult to make use of database limiters, truncation, or wildcard searching effectively. Different databases may implement these advanced search tools differently (or not offer them at all). Attempting to search across different platforms limits the functional search tools to the lowest common denominator across the databases. If search settings in the federated search product are adjusted to get better results from one particular database, the rest of the results may suffer. As Jody Condit Fagan (2011), the editor of the Journal of Web Librarianship, puts it: “Who knows if bad results are from the databases searched, the federated search software, or one’s own search strategy? Results are messy and duplicative, and users frequently can’t tell what the items returned actually are” (p. 77) [34]. So what do librarians do with all of the resources that cannot be included or searched effectively in a federated search? They are back to needing to teach users how to search all of the various interfaces individually and to select the best resources for an information need (if those users can even find the resources within the depths of the library’s website first!).

9 Search Process: Web-Scale Discovery

So, then, to truly move into the Google-like search realm with better speed and more reliable and customizable results, a centralized search model needs to be in place. This is what a few vendors began building next, and in 2009, Serials Solutions (now part of ProQuest) was first to the library market with its launch of the Summon discovery service. Web-scale discovery services are the next generation in library resource searching [31].

Unlike federated searches, discovery services return results quickly and in relevance-ranked order. Once results are returned, the discovery layer (or search interface) allows the user to refine and sort them using facets (e.g., year of publication, author, language, subject, publication type, or database source). The user is linked to full text either via direct links (if the resource is also hosted on the discovery service vendor’s platform) or via OpenURL technology.

This model scales well to the size of the web because content and metadata have been indexed in advance of a user’s search. With the increased capacity and reduced cost of data storage, the creation of this type of centralized index (which is at the heart of all web-scale discovery services) became possible [31]. Within the central index are both the library’s local resources and its licensed e-content. The library works with the vendor to load its OPAC records into the centralized index (for information about items held physically in the library). Along with this type of local content, libraries may also include metadata for institutional repositories of student and faculty work and/or locally digitized collections. On the more external side, metadata and full-text content from licensed and open access publishers and content providers can be selected for inclusion. Many discovery services have also licensed content from third-party vendors for inclusion in their central index, regardless of whether the library subscribes to that particular resource on its own. Content available to the library through subscriptions to database aggregators (e.g., APA’s PsycNet, ProQuest’s Research Library, EBSCO’s Business Source Premier, etc.) may also be included. However, this type of content needs to be mutually licensed by both the library and the discovery vendor. Since many of the discovery vendors are also in the field of licensing e-content and providing access on their own proprietary platforms, they may choose not to make the metadata for and links to these resources available to other discovery vendors. In this way, not all of a library’s e-resources will necessarily be available for inclusion in the centralized index of its discovery service [35].

Because the content is pre-indexed, all of the advanced search options frequently unavailable in federated searching are available to the user of a discovery service. Truncation, wildcard, and exact phrase searching, and the use of Boolean operators, are all possible. While discovery services all have these basic characteristics in common, there are differences among them, and there are several vendors in the marketplace at this point. Perhaps the four with the largest market share are Summon (originally launched by Serials Solutions, which has since been bought by ProQuest), Ex Libris’ Primo Central, EBSCO’s EBSCO Discovery Service (EDS), and OCLC’s WorldCat Local (although OCLC also has a new WorldCat Discovery product, just launched in March of 2014).
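
To contrast this with the federated model sketched earlier, the following Python sketch builds a tiny centralized index over invented records before any search occurs; lookups then amount to fast set operations, and features such as trailing-asterisk truncation and Boolean AND can be supported uniformly rather than per remote database. Real discovery services operate at vastly larger scale with relevance ranking, but the pre-indexing principle is the same.

from collections import defaultdict
RECORDS = {
    1: {"title": "Earthquake engineering basics", "source": "OPAC"},
    2: {"title": "Flood risk and earthquakes", "source": "Aggregator"},
    3: {"title": "Volcanology field methods", "source": "Repository"},
}
# Built once, ahead of any search: term -> set of record ids.
index = defaultdict(set)
for rec_id, rec in RECORDS.items():
    for term in rec["title"].lower().split():
        index[term].add(rec_id)
def lookup(term):
    """Match one query term; a trailing * is treated as truncation."""
    if term.endswith("*"):
        prefix = term[:-1]
        ids = set()
        for indexed_term, rec_ids in index.items():
            if indexed_term.startswith(prefix):
                ids |= rec_ids
        return ids
    return index.get(term, set())
def search(query):
    """AND-combine all query terms (Boolean AND)."""
    sets = [lookup(t) for t in query.lower().split()]
    return sorted(set.intersection(*sets)) if sets else []
print(search("earthquake*"))  # matches 'earthquake' and 'earthquakes' -> [1, 2]
print(search("flood risk"))   # -> [2]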

Summon bills itself as “the only discovery service based on a unified index of content. More than 90 content types, 9000 publishers, 100,000 journals and periodicals, and 1 billion records are represented in the index. New content sources are added every week and content updated daily.” With its “Match and Merge” technology, Summon ingests content from various providers, “combin[ing] metadata, including discipline-specific vocabularies, with full-text content when available to create a single record” for each resource [36]. Figure 12.9 shows an example results screen from a search in Summon. More information about items in the results list (abstract, authors, dates) is also shown in the right margin when hovering over a particular result.

Fig. 12.9

An example of a search results screen in Summon, a web-scale discovery service

EBSCO, which is also a major content provider and has established relationships with diverse publishers, is able to leverage its existing resources to include native database indexing (which is frequently performed by subject experts in the field for inclusion in an individual, subject-specific database and adds value) and subject-specific controlled vocabularies in its discovery service [37].

OCLC has a unique position in the marketplace as it sees itself as “content-neutral.” Having gotten out of the business of hosting third-party databases, it claims to be able to build relationships with a larger variety of content providers more easily [38]. And certainly, this is an issue. Some database vendors are also in the web-scale discovery business and do not wish to provide all of their indexing or content to competitors. For instance, EBSCO currently refuses to provide its content to Ex Libris for inclusion in Primo Central [39].

Libraries have had to develop their own awkward workarounds, and in the end, patrons are not served well. This debate has been well-documented and brought to public attention by the Orbis Cascade Alliance [40], a nonprofit library consortium of 37 colleges and universities in Oregon, Washington, and Idaho. In a letter to both vendors dated October 6, 2014, regarding their failure to resolve the stalemate, the Orbis Cascade Alliance Board of Directors states: “This failure to act is unacceptable and strongly suggests that both companies value business gamesmanship over customer satisfaction and short-term gain over service to students, faculty, and researchers. The library community expects an explanation and we call upon EBSCO and Ex Libris to provide a public update and projection of when this impasse will be resolved. As a major customer, the Orbis Cascade Alliance membership expects to spend in excess of $30 million with EBSCO and Ex Libris over the next five years. With these issues left unresolved, we will now take active steps to reconsider the shape and scope of future business with EBSCO and Ex Libris.” [41]

Discovery services have been very popular upon implementation. In a January 2014 survey of nearly 400 libraries using discovery services, overall satisfaction with the products ranged from 6.26 to 6.95 on a 9-point scale. Marshall Breeding [42] found that overall satisfaction was highest among users of EBSCO Discovery Service and lowest among users of Primo Central. Interestingly, all discovery services had higher popularity scores among undergraduates than among graduate students or faculty. This could be due in part to issues with known-item searching: faculty and graduate students are more likely to be searching for a specific resource (a journal article, book, or image), and discovery services are better at exposing a large range of resources to the searcher than at pinpointing a single known item.

Web-scale discovery still remains out of the budget range of many libraries. A 2010 review of Summon, EDS, and WorldCat Local published in The Charleston Advisor [43] described the pricing of these services as ranging from $9000 to over $100,000 per year depending on the size of the library’s collection, the size of the population served, and optional add-on services (incorporating institutional repositories, enhanced book content, building connections to additional resources not included in the provider’s central index). Despite this, it could be argued that the cost/benefit ratio is in favor of acquiring a discovery product. Users finally do get closer to utilizing a single search, and the library’s e-resources receive greater exposure and usage. Discovery services are also generally mobile-friendly and can incorporate most, if not all, of the content of a library’s OPAC. Additionally, because these services are hosted by the vendor, libraries do not need to worry about server or software upgrades [44].

So, what’s next? Is there territory beyond web-scale discovery? Certainly discovery services are continuing to improve. Librarians need to remain closely involved in the development of these tools—making sure to customize library products to best meet the needs of the types of users they serve [39]. With the move to a single search portal, librarians may be able to devote more time to the development of local “born digital” collections and institutional repositories—and utilize the discovery service as a way of making that content more visible to end users. Discovery services may help librarians stay more current and relevant in the eyes of patrons who are always expecting a Google-like experience, but education is still key. Users need to know the basics about evaluating information, considering results for relevancy, and identifying the types of information being retrieved. Librarians are experts in these areas.

10 Integrating Services: Library Consortia

As the electronic resources available to libraries continue to increase in depth and breadth of coverage and in complexity of access models, libraries have turned to consortial models to help manage these workflows. Library consortia are not new. There is evidence of consortial behavior dating back to the late 19th century, when groups of libraries banded together to share cataloging, participate in very rudimentary forms of interlibrary loan, and purchase cooperatively [45]. This section, however, will focus on how library consortia work today.

Libraries license content and/or platforms for access from vendors. Unlike the books or videos a library purchases, these items are frequently leased, not owned. As a result, they carry a range of restrictions not found in purchases of physical formats [46]. The license agreements for e-resources can be tedious, and individual libraries may not have the expertise to fully understand and negotiate these contracts in their own best interest. Concerns arise over which resources allow remote access to affiliated users only and which allow more relaxed rules. There are also questions about what electronic content may be used to fill interlibrary loan requests from other libraries. Database license agreements are not consistent across the board, and e-resource license management can be overwhelmingly time-consuming if performed thoroughly. Most libraries have neither the staff to spare for this singular function nor the legal expertise to perform it, and this is one niche that consortia have been able to fill.

Library consortia can negotiate with vendors on behalf of all of their member institutions. Some may have experts in license agreements on their staff or rely upon committees of librarians from member institutions to review agreements for newly-licensed content before offering it to member libraries for purchase. Consortia can also offer content that individual libraries may not have been able to afford on their own. When purchasing together, consortia can acquire large e-journal packages for their member institutions; on a per-title basis, individual libraries pay much less for these titles than they would if they purchased their own subscriptions a la carte. Consortia are also able to negotiate with vendors to suppress cost increases with far more power than individual libraries ever could [47]. They can reject dramatic cost increases, object to restrictive licensing terms, and achieve better discounts overall. End users benefit because they have access to more expensive, niche resources.

In addition to performing cooperative purchasing and licensing of databases and other e-resources, consortia may also work together to offer interlibrary loan services. Some consortia, like OhioLINK (formed in Ohio in 1989 and composed of 90 public and private academic libraries plus the State Library of Ohio) [48] and the Orbis Cascade Alliance (comprising 37 academic libraries in the Pacific Northwest, formed in 2003 from a merger of the Orbis and Cascade Alliances, which originated in the early 1990s) [49], have partnered with vendors to create consortial library catalogs. These catalogs enable borrowing and lending among member libraries in a way that is more seamless to the library patron (who may simply place a hold with one click, rather than filling out an interlibrary loan form). In the case of the Orbis Cascade Alliance, the consortium actually shares an integrated library system (ILS), which is responsible for not only the public catalog (OPAC) but also the back-end staff circulation, acquisitions, reporting, and cataloging functions. This allows items to be checked out as if they were from one large library with many branches, as opposed to individually siloed libraries with their own ILS software, circulation rules, and processing procedures. And again, library patrons benefit because a much larger array of resources is presented for their use at a reduced cost and an expedited processing speed [50].

Even outside of official consortial agreements (which may offer library patrons reciprocal borrowing privileges or fee-free interlibrary loans among member institutions), interlibrary loan has continued to rise in popularity. Discovery services present patrons with more results than ever before, and the sponsoring library will not own all of those items. Interlibrary loan request links are placed prominently within non-owned search results, allowing for an e-commerce-like experience for patrons who are used to purchasing items through Amazon with one click [51]. Figure 12.10 shows what this looks like inside the catalog of a library using OCLC’s WorldCat Local. Generally, interlibrary loan is free for academic library users, or carries only a nominal fee. Interlibrary loan allows libraries to provide access to content for which they could not otherwise justify paying full price, or which they could not justify acquiring at all. The modern interlibrary loan framework was largely created by OCLC (formerly the Ohio College Library Center and now the Online Computer Library Center) with its WorldCat product. Beginning as the OCLC Online Library Union Catalog in 1971, it developed into WorldCat in 1996 and into the freely available and searchable WorldCat.org in 2006. By authenticating to their home library, patrons searching WorldCat can request items via its union catalog (which represents the holdings of libraries—both physical and electronic—all over the world) [52].

Fig. 12.10
figure 10

Example of a 1-click Interlibrary Loan requesting option within a library catalog

11 Integrating Services: Vendor Partnerships

Vendor partnerships can be valuable for libraries, and opportunities in this arena continue to multiply. One relatively new development is the introduction of cloud-based, full-featured, integrated library systems (ILS). These new products are also being offered by vendors who, in the past, did not offer full library solutions. For example, OCLC introduced its WorldShare Management Services (WMS) in July of 2011; over 300 libraries worldwide now use the service [53]. OCLC's WMS includes modules for acquisitions, e-resource management (knowledge base, metadata, and OpenURL), circulation, analytics, interlibrary loan, discovery (the system also acts as a web-scale discovery service), and an optional license manager for online resources. The price and work involved on the part of the library in migrating to a new cloud-based ILS like WMS are considerable. However, both the library and its end users benefit in the long run, as the library can reduce costs by consolidating all of these activities in one service.
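OpenURL, the linking syntax mentioned above, is worth a brief illustration. The sketch below builds an OpenURL 1.0 (Z39.88-2004) request in Python; the key-encoded-value (KEV) field names come from the standard, while the resolver base URL and the sample citation are hypothetical placeholders, not any particular vendor's configuration.

```python
# Minimal sketch of constructing an OpenURL 1.0 (Z39.88-2004) request.
# A link resolver maps this citation metadata to the library's full text.
from urllib.parse import urlencode

RESOLVER_BASE = "https://resolver.example.edu/openurl"  # hypothetical resolver

def build_openurl(atitle, jtitle, issn, volume, spage, date):
    """Return an OpenURL carrying a journal-article citation."""
    params = {
        "url_ver": "Z39.88-2004",                       # OpenURL version
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",  # journal metadata format
        "rft.genre": "article",
        "rft.atitle": atitle,    # article title
        "rft.jtitle": jtitle,    # journal title
        "rft.issn": issn,
        "rft.volume": volume,
        "rft.spage": spage,      # starting page
        "rft.date": date,
    }
    return f"{RESOLVER_BASE}?{urlencode(params)}"

# Example with a fictitious citation:
print(build_openurl("Consortial licensing", "Journal of Examples",
                    "1234-5678", "55", "1", "2015"))
```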

Once the library has fully migrated onto the new cloud-based platform, it can cancel separate subscriptions with diverse vendors for managing e-resources, providing the library catalog, adding enhanced content to the catalog (like book cover images and review information), offering web-scale discovery functionality, and performing all of the back-end tasks like circulation, acquisitions (the ordering, invoicing, and receiving of books), and serials management (the placement and monitoring of subscriptions and the electronic check-in of individual journal and magazine issues). Libraries also no longer need to perform frustrating and time-consuming software upgrades or have custom reports rewritten every time a new version of a vendor's ILS is released, and there is no longer a need to maintain hardware within the library or within the university's IT department. With cloud-based hosting, new releases are handled centrally and deliver new functionality continually. While this can be daunting (at least one person in the library needs to keep up with changes to the service), it ultimately offers the highest level of responsiveness to trends in information seeking and provision. Similar products from other vendors include Ex Libris' Alma and ProQuest's Intota (still in development, with its collection assessment piece launched in November 2013).

Some legacy ILS vendors have offered products in response, but these generally exist as optional overlays to the existing system, or are built upon existing ILS infrastructure with some consolidation of services, automation, and discovery products offered by the vendor or its partners. These vendors tout the fact that libraries can keep the product they have always used, containing all of their data, with no need to migrate information to a new and “untested” system [54]. These new services from legacy vendors can be offered as SaaS (software as a service), so they can be fully hosted. However, they must be implemented within the current hosting framework (with some libraries hosted in the cloud, some via SaaS, and others hosting locally on their own hardware), which means legacy vendors will likely need to support multiple versions of their new ILS. In contrast, the built-from-scratch systems are not based on old legacy code, and updates, patches, and bug fixes can be pushed out to all users from the development side simultaneously [55].

End users are frequently unaware of this behind-the-scenes work on the part of their libraries. However, moving to new cloud-based ILS platforms can result in new workflows and reallocated time on the part of library personnel. With WMS, for example, cataloging is much quicker: librarians simply select the appropriate master record within WorldCat and attach their institution's holdings to it. Libraries can cooperatively manage cataloging by contributing updates, corrections, and additional information to the master record for the benefit of all libraries. With less time needed for copy cataloging, librarians working as catalogers may be able to devote time to other projects, like cataloging a unique local collection or advising on the virtual construction of an institutional repository.

12 Integrating Services: Cooperative Reference Services

Library services extend beyond the discovery of information and provision of access. Reference , which is at the core of library service, has also been impacted by collaborative management. The American Library Association defines reference transactions as “information consultations in which library staff recommend, interpret, evaluate, and/or use information resources to help others to meet particular information needs.” [56] Traditionally, this has taken place in person at a reference desk in the library or via telephone.

According to a 2012 survey by the National Center for Education Statistics , 74.9 % of all US academic libraries offered some form of virtual reference as well, with 26.6 % offering chat reference via a commercial service and 32.8 % offering chat reference via instant messaging applications. This was an increase from 2008, when 72.1 % of all US academic libraries offered virtual reference. Academic libraries serving larger populations are more likely to offer some form of virtual reference [57]. These trends are similar for public libraries [58].

Libraries willing to go it alone have used free instant messaging software like Yahoo chat, Google Chat, and MSN Messenger. Because those services are not provider-neutral, chat aggregators (Pidgin, Meebo) also became popular in libraries; they allowed librarians to receive chat questions from patrons using any chat software. As stand-alone services, however, they allowed only one librarian to monitor the chat queue, and transferring a chat reference question to another librarian was quite difficult [59].

In 2008, LibraryH3lp was launched as part of a collaborative providing after-hours chat reference at Duke University, UNC Chapel Hill, and North Carolina State University. It utilizes the open standard XMPP as its chat protocol, which allows the chat queue to be monitored by any of a number of free clients, such as Pidgin. LibraryH3lp widgets can be inserted into websites, databases, discovery layers, or subject guides (like LibGuides) for patrons to access at their point of need [60].
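Because XMPP is an open standard, a basic chat endpoint can be stood up with an off-the-shelf library. Below is a minimal sketch, not LibraryH3lp's actual implementation, using the Python slixmpp library; the account name, password, and auto-reply text are hypothetical placeholders.

```python
# A minimal sketch (not LibraryH3lp's implementation) of an XMPP chat
# endpoint using the slixmpp library.
import slixmpp

class ReferenceBot(slixmpp.ClientXMPP):
    def __init__(self, jid, password):
        super().__init__(jid, password)
        self.add_event_handler("session_start", self.on_start)
        self.add_event_handler("message", self.on_message)

    async def on_start(self, event):
        self.send_presence()      # mark the reference desk as available
        await self.get_roster()   # load the contact list

    def on_message(self, msg):
        # Patron messages arrive as XMPP "chat" stanzas; a real service
        # would route them to a staffed queue. Here we simply auto-reply.
        if msg["type"] in ("chat", "normal"):
            msg.reply("A librarian will be with you shortly.").send()

if __name__ == "__main__":
    bot = ReferenceBot("refdesk@chat.example.edu", "secret")  # hypothetical
    bot.connect()
    bot.process(forever=True)
```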

For libraries that want more extensive, 24/7 virtual reference coverage, there are other (more expensive) products in the marketplace. OCLC offers QuestionPoint, a reference cooperative staffed by librarians from subscribing libraries. During overnight hours, contract librarians, who have access to basic information about the home institutions' policies and resources, staff the service. This is important, as many reference questions are of the basic informational variety (open hours, directions, library fine policies, etc.). Libraries using QuestionPoint are responsible for staffing it for their own users as much as they would like, as well as for providing a few hours of coverage each week to the entire cooperative [61]. When chat questions are submitted by patrons, they are first routed to the home library. If there is no response, they are then sent out to the library-defined partners (which could be a consortium or a network of state universities). If there is still no timely response, the question is sent out to the main QuestionPoint cooperative, where any librarian staffing the service may respond to it [59].
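The tiered routing just described is easy to picture as a small escalation algorithm. The sketch below models it in Python; the queue names, data structures, and the claim-based handoff are illustrative assumptions, not OCLC's actual API.

```python
# Hedged sketch of QuestionPoint-style tiered routing: a question goes to
# the home library first, then to library-defined partners, then to the
# whole cooperative.
from collections import deque

class ChatQueue:
    """Librarians available at one routing tier."""
    def __init__(self, name, librarians):
        self.name = name
        self.available = deque(librarians)

    def claim(self):
        """Return a librarian if one is free at this tier, else None."""
        return self.available.popleft() if self.available else None

def route_question(question, tiers):
    """Escalate tier by tier until some librarian claims the question."""
    for queue in tiers:  # ordered: home -> partners -> cooperative
        librarian = queue.claim()
        if librarian:
            return queue.name, librarian
    return "unanswered", None

tiers = [
    ChatQueue("home_library", []),               # nobody staffing right now
    ChatQueue("partner_consortium", []),         # partners also busy
    ChatQueue("questionpoint_cooperative", ["overnight_contract_librarian"]),
]
print(route_question("What are today's hours?", tiers))
# -> ('questionpoint_cooperative', 'overnight_contract_librarian')
```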

By the end of 2012, approximately 24 % of academic libraries offered some form of text (SMS) reference service. With this type of service, library users send a text message asking for reference assistance; librarians receive and respond to the text via a web interface (and do not need to monitor it on their mobile phones). From 2010 to 2012, text reference service in large public libraries (those serving 500,000 people or more) increased from 13 to 43 % [62]. Springshare, which created LibGuides, added text reference capabilities to its LibAnswers suite via an add-on LibChat module [63]. Mosio for Libraries offers a Text-a-Librarian service as well as chat and email virtual reference capabilities. My Info Quest, a text reference cooperative sponsored by the South Central Regional Library Council of Ithaca, NY, uses Mosio's Text-a-Librarian product. It is currently staffed 80 h per week, with plans to increase coverage as more libraries join the service. Text messages may be sent at any time via LibChat or Mosio's product but will not be answered until a librarian is available; using a cooperative like My Info Quest allows quicker responses to patron reference needs [64].

13 Integrating Services: Instant Information for the End User

In a world of streaming video via Netflix and Hulu and streaming music via Spotify and Pandora, library patrons want the same kind of instant access to information and entertainment from their libraries. Libraries and vendors have responded with the development of e-book and e-audiobook platforms like OverDrive, which are readily integrated across a patron's devices via an app.

OverDrive, which started out in the CD-ROM industry in the 1980s, first began offering downloadable e-books and e-audiobooks to libraries in 2003 [65]. Libraries select from OverDrive's catalog of content and offer what they choose to their patrons. Content can be integrated into the library catalog and/or searched separately on the library's website. Additionally, patrons can download the OverDrive app to their mobile device or tablet and search directly within that interface. Patrons must authenticate by providing their library card credentials in order to use the service and view the library's catalog of OverDrive content. Within the OverDrive app, patrons may use the native reader to read an e-book, place holds on titles, or follow a link to check out the Kindle edition directly from Amazon. Because libraries are not restricted by physical shelf space, they may choose to buy 5, 10, 20, or even 50–100 copies of popular titles to minimize patron wait times. Some titles offer libraries the option of purchasing unlimited simultaneous usage, but most are licensed single-user, single-copy. OverDrive also contains e-audiobook content, which is playable directly within the app and cloud-based, so playback is synced across devices signed into the same account. At the end of the check-out period, the file (whether e-book or e-audiobook) simply expires from the patron's device; there are no overdue fines for patrons, either [66]. Other vendors have also entered the e-book and e-audiobook market, notably Axis 360 from Baker and Taylor (a company that started out as a book distributor), 3M Cloud Library, and RBDigital (from Recorded Books).
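The single-user, single-copy model described above amounts to a small piece of bookkeeping: each licensed copy is either checked out or available, and excess demand lands in a holds queue. A minimal sketch follows; the class and field names are illustrative, not OverDrive's API.

```python
# Sketch of single-user, single-copy e-book lending with a holds queue.
from collections import deque

class LicensedTitle:
    """One licensed title: each copy serves one patron at a time."""
    def __init__(self, title, copies):
        self.title = title
        self.copies_available = copies
        self.holds = deque()   # patrons waiting, first come first served

    def checkout(self, patron):
        if self.copies_available > 0:
            self.copies_available -= 1
            return f"{patron} checked out '{self.title}'"
        self.holds.append(patron)
        return f"{patron} placed on hold (position {len(self.holds)})"

    def expire_loan(self):
        """A loan period ends: the copy passes to the next hold, or restocks."""
        if self.holds:
            next_patron = self.holds.popleft()
            return f"{next_patron} auto-checked out '{self.title}'"
        self.copies_available += 1
        return "copy returned to availability"

book = LicensedTitle("Popular Novel", copies=2)
print(book.checkout("alice"))   # checked out
print(book.checkout("bob"))     # checked out
print(book.checkout("carol"))   # placed on hold (position 1)
print(book.expire_loan())       # carol auto-checked out
```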

Books and audiobooks have long been a major brand of the library, but libraries have also begun to offer their patrons electronic access to movies, television shows, music, and popular magazines. Some of this content is downloadable, like music from Freegal. Freegal allows users to download a limited number of songs per week from Sony Music Entertainment and other labels with which it has made agreements. While copyright laws always apply, these downloads are DRM-free, can be played as mp3 files on any device, and do not expire [67].

Zinio, offered through a partnership with Recorded Books (yet another traditional audiobook vendor!), is a service that provides downloadable popular magazines to mobile devices and tablets. It also has its own consumer marketplace (which exists, for a fee, completely outside of libraries) providing magazine subscribers electronic copies of their subscriptions via the Zinio app. In May 2012, a digital magazine newsstand was launched for libraries. Libraries can choose from a catalog of over 5500 titles in over 20 languages to make available to patrons. There are no limits for patrons, and files do not expire; they can remain on a patron's device indefinitely. Libraries pay a tiered platform fee based on annual circulation, as well as per title selected. Library Journal reported a 2012 price point of $6417 per year paid by the Chattanooga Public Library for access to 121 titles plus the cost of the platform. Patrons read the magazines in the Zinio app, which provides high-definition, full-color pictures and interactive media elements [68]. The Zinio interface on a library's website is shown in Fig. 12.11.

Fig. 12.11
figure 11

Zinio interface on a public library’s website

Public libraries have also begun to provide access to streaming content, which circumvents long file download times. Hoopla, a streaming service started by Midwest Tapes (also a traditional audiobook vendor), offers patrons access to streaming movies, television shows, music, and audiobooks. Libraries pay very little up front; instead, they offer the content to their patrons via their website or the Hoopla app and are charged as patrons use it. They can then choose to throttle usage to stay within their budgets. This may mean that popular titles are only available to the first X patrons wanting to access them per day; subsequent patrons are told that the limit has been reached for the day, but to try back tomorrow. There are no wait lists, and libraries can choose the loan period for all items [69]. Other streaming services include IndieFlix (for movies), Freegal, and OverDrive (the latter two having recently entered the streaming movie and television market).
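The per-day throttling just described can be pictured as a simple daily counter. The sketch below models it in Python; the cap value and names are illustrative assumptions, not Hoopla's actual mechanism.

```python
# Hedged sketch of per-day lend throttling: the library caps how many
# streaming lends it will fund each day; once the cap is hit, further
# patrons are asked to try again tomorrow.
import datetime

class DailyLendThrottle:
    def __init__(self, daily_cap):
        self.daily_cap = daily_cap
        self.lends_today = 0
        self.day = datetime.date.today()

    def request_lend(self, patron, title):
        today = datetime.date.today()
        if today != self.day:            # reset the counter each new day
            self.day, self.lends_today = today, 0
        if self.lends_today >= self.daily_cap:
            return f"Daily limit reached; '{title}' will be available tomorrow."
        self.lends_today += 1            # the library is charged per lend
        return f"{patron} is now streaming '{title}'."

throttle = DailyLendThrottle(daily_cap=2)
print(throttle.request_lend("alice", "Popular Movie"))
print(throttle.request_lend("bob", "Popular Movie"))
print(throttle.request_lend("carol", "Popular Movie"))  # limit reached
```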

While academic libraries do not usually offer such services to their patrons, students and faculty often need access to information not owned or leased (in the case of database content) by their library. Traditional interlibrary loan has always been on offer, but it can take days for journal articles, and up to weeks for books or videos, to be shipped to the borrowing library. For students with a paper due at midnight, that is just not a viable option! In the early 2000s, libraries tried to adapt to patron needs by initiating just-in-time purchases from interlibrary loan requests. Libraries can purchase materials from several vendors (Better World Books, Alibris) directly within their interlibrary loan requesting modules; in this way, what began as an interlibrary loan request results in a fast purchase. When the item arrives, libraries may forgo the usual processing (cataloging, covering the book, entering it into their ILS) and only do so when the book is returned by the requesting patron.

This model has evolved even further. What is now termed patron-driven acquisition (PDA) has been applied most frequently to building e-book collections. In its simplest form, PDA refers to the process of allowing library user requests and information-seeking behavior to decide, in part, which materials the library acquires. Several vendors (Ebrary, eBooks on EBSCOhost, and E-book Library) now offer mechanisms for patron-driven acquisition. Libraries set up a profile with the vendor based on subjects, publishers, dates of publication, cost of items, authors, keywords, or a variety of other criteria, filtering the types of items that they would like to make available to their patrons. This results in a pool of possible items, which libraries do not purchase or pay for up front; in fact, outside of a vendor platform or hosting fee, the setup is otherwise free. Depending on their preferences, libraries can request MARC records for this pool of items so that they can be added to the library catalog. Some libraries may choose instead to make these titles available through a search interface provided by the vendor (in much the same way a patron would search a database). However, in this age of discovery, that is generally not seen as a best practice: to expose patrons to the entire pool of potential items, it is ideal to load them into the library catalog, dramatically enhancing visibility. For libraries using next-gen catalogs (like OCLC's WMS or Ex Libris' Alma) or a discovery layer, these records can be “turned on” as part of e-resource management. When patrons search in these interfaces, results from the PDA pool are returned and appear to be owned by the library; patrons can click and directly access the content.
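Profile-based pool building is, in essence, record filtering. The sketch below shows one way it might work; the profile fields, the sample catalog records, and the publisher names are illustrative assumptions, not any vendor's schema.

```python
# Hedged sketch of PDA profile filtering: the library's criteria select a
# pool of candidate e-books from the vendor's catalog.
PROFILE = {
    "subjects": {"computer science", "information science"},
    "publishers_excluded": {"Overpriced Press"},   # hypothetical publisher
    "max_list_price": 150.00,
    "earliest_pub_year": 2010,
}

def in_pool(record, profile):
    """Return True if a vendor catalog record qualifies for the PDA pool."""
    return (record["subject"] in profile["subjects"]
            and record["publisher"] not in profile["publishers_excluded"]
            and record["list_price"] <= profile["max_list_price"]
            and record["pub_year"] >= profile["earliest_pub_year"])

catalog = [
    {"title": "Intro to Metadata", "subject": "information science",
     "publisher": "Univ Press", "list_price": 80.0, "pub_year": 2013},
    {"title": "Old Treatise", "subject": "computer science",
     "publisher": "Univ Press", "list_price": 60.0, "pub_year": 1998},
]
pool = [r for r in catalog if in_pool(r, PROFILE)]
print([r["title"] for r in pool])  # only 'Intro to Metadata' qualifies
```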

Libraries are then only charged when patrons use items in the PDA pool. What counts as use is defined by thresholds that differ by vendor, but it is usually along the lines of browsing exceeding 5 or 10 min, or a patron attempting to download or copy and paste content from the e-book. Performing any of these tasks can either trigger an outright purchase of the title or, if libraries choose, a “short term loan” (STL) of a time period defined by the library (generally 1 or 7 days). After a certain library-defined number of STLs, a purchase of the e-book is triggered. The STL fees are not applied towards the purchase cost of the e-book; the purchase is at full price. The library is billed for short term loans at a fixed percentage of the list price of the e-book. This percentage varies from vendor to vendor, but it is always less than the cost of purchasing the book. Recently, publishers (e.g., Taylor and Francis, Bloomsbury, Oxford University Press, Wiley, and McGraw Hill, among others) have been raising short term loan prices, as they feel the STL model is not a viable one for them (largely because libraries can decide how many short term loans occur before a purchase is triggered). In the past, some STL prices were as low as 5 % of the list price of the book for a day's usage, and some libraries were allowing 4, 5, or more STLs before triggering an auto-purchase of an e-book. Essentially, this meant they were perpetually renting books and buying very few titles outright. This is an area that has been generating much discussion among librarians. To accommodate libraries that object to the higher STL prices, some e-book vendors have added platform settings that let librarians block STLs from any publisher charging more than a particular percentage of the list price. Libraries may also choose to trigger an auto-purchase of an e-book sooner, or to exclude particular publishers from their PDA program completely. Perhaps it is only when a critical mass of libraries take this more drastic step that publishers will rethink the viability of their current pricing models [70].
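The STL economics are easiest to see with a worked example. The calculation below uses the 5 % rate and five-loan trigger mentioned above; the $100 list price is an illustrative assumption, and actual rates vary by vendor and publisher.

```python
# Worked sketch of short-term-loan (STL) economics for one e-book.
LIST_PRICE = 100.00
STL_RATE = 0.05           # each 1-day STL billed at 5 % of list price
STLS_BEFORE_PURCHASE = 5  # library-defined trigger for auto-purchase

stl_spend = STLS_BEFORE_PURCHASE * STL_RATE * LIST_PRICE
total_if_purchased = stl_spend + LIST_PRICE  # STL fees do NOT offset purchase

print(f"Spent on rentals before purchase: ${stl_spend:.2f}")           # $25.00
print(f"Total cost if purchase triggers:  ${total_if_purchased:.2f}")  # $125.00
# If most titles never reach the trigger, the library effectively rents at
# a fraction of list price, which is why publishers have raised STL rates.
```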

Libraries can also implement the PDA model with streaming video. Kanopy, originally an Australian DVD distribution service, offers this. Libraries can also license individual titles or collections from particular producers and distributors, but Kanopy's PDA model is a new one in the library market: four views of a video (30 s of viewing counts as one view) trigger a license purchase, with one- and three-year licenses available [71]. In both the streaming video and e-book PDA models, libraries may choose to put a set amount of money on deposit with a vendor and run their PDA program until it is exhausted, or pay as they go. Libraries may also choose to mediate the PDA process (requiring patrons to request access) or let it run unmediated (the more popular choice). Best practices generally state that patrons should not be made aware that a PDA model is in place: libraries want patrons to access and use the materials they need without thinking about how much that usage might be costing their library, since that awareness might cause patrons to alter their information-seeking behavior. Overall, patron-driven acquisition allows libraries to make a much larger pool of items visible and available to their users without having to pay up front or provide shelf space. The thinking is also that if patrons choose the items, perhaps they will be utilized more than items selected solely by librarians or approval plans.

Academic library patrons also need rapid access to journal articles to which their library may not have access. The Copyright Clearance Center (the organization responsible for collecting payments from libraries for interlibrary loan usage of unsubscribed titles, and which manages licensing of content generally) provides a Get it Now service [72]. This service can be integrated with a library's OpenURL resolver or existing ILL workflow. Just like PDA for e-books and streaming video, it can be offered either mediated (much like interlibrary loan, but with the article delivered immediately upon processing) or unmediated (through the library's website). With Get it Now, the requesting library provides payment behind the scenes to the Copyright Clearance Center covering the cost of access and copyright payments, and the article is delivered immediately to the patron. The library can set up restrictions in advance (e.g., a cap on the costs any one patron may incur via the service, limits on which titles are available, price limits per article), but the process runs invisibly to the end user. Libraries pay less than the per-article cost charged by individual publishers on their websites, so money is saved as well.
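Those advance restrictions amount to a pre-purchase policy check. The sketch below shows one plausible form such a check could take for an unmediated service; the dollar thresholds, names, and routing decision are illustrative assumptions, not CCC's actual configuration.

```python
# Hedged sketch of advance restrictions for an unmediated article-purchase
# service: a per-article price limit and per-patron spending cap are checked
# before a request is fulfilled.
PER_ARTICLE_LIMIT = 35.00   # assumed max the library will pay per article
PER_PATRON_CAP = 200.00     # assumed max spend per patron per term
patron_spend = {}           # running totals, keyed by patron ID

def approve_request(patron_id, article_price):
    """Return True if the unmediated request stays within library limits."""
    if article_price > PER_ARTICLE_LIMIT:
        return False        # e.g., route to mediated interlibrary loan instead
    spent = patron_spend.get(patron_id, 0.0)
    if spent + article_price > PER_PATRON_CAP:
        return False
    patron_spend[patron_id] = spent + article_price
    return True

print(approve_request("p123", 24.00))  # True: within both limits
print(approve_request("p123", 48.00))  # False: exceeds per-article limit
```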

All of these new developments in library services and products continue to change and expand rapidly. Librarians need to remain at the forefront of technology, knowledgeable about the tools, products, and services that connect users with information. Similarly, they need to serve as experts in how information is created, evaluated, and disseminated. While books will most likely always be one of the library's best-known brands [73], provision of access to electronic content is at the true center of librarians' work today.