Electronic Resource Management in Libraries

Author: C. Sean Burns
Date: 2022-08-13
Email: sean.burns@uky.edu
Website: cseanburns.net
Twitter: @cseanburns
GitHub: @cseanburns

This is a rough draft of a short book based on a series of lectures for my course on electronic resource management.

I plan to complete this work during the Fall 2022 semester.

Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Chapter 1: Electronic Resource Librarians

Chapter 1 is completed (8/20/2022).

The ERM Librarian

Introduction

In this section, I will:

  • provide examples of electronic resources,
  • frame the topic of this course, and
  • discuss the readings.

Examples

This semester we're learning about electronic resources and about how to manage them. Let's begin by outlining the kinds of things that count as electronic resources. Karin Wikoff (2011) outlines the major categories; two of them, ebook technologies and linking technologies, are described below.

Ebook technology is rather complicated and varies with copyright status, file type (PDF, EPUB, TXT, etc.), and purpose or genre (textbook, fiction or nonfiction, etc.). In some cases, ebooks are software applications and not just plain or marked-up text. They also vary by platform, or the application used to interact with the text, each of which may offer different types of functionality.

Linking technologies allow users to begin in one search system, like a discovery service, and extend that query to other systems without requiring the user to initiate separate searches in those systems. For example, a user begins a search in a library's discovery system, like UK Libraries' InfoKat (powered by Primo (Breeding, 2006)). InfoKat identifies multiple articles related to the query, even though those articles are located in other full-text systems, like EBSCOhost, ProQuest, or JSTOR.
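
To make the linking idea more concrete, here is a minimal sketch, in Python, of how a source system might construct an OpenURL for a link resolver. The resolver address is a hypothetical placeholder, and the key names follow the OpenURL (ANSI/NISO Z39.88) key/value format for journal articles; the citation values are drawn from one of this course's readings.

from urllib.parse import urlencode

# Hypothetical link resolver address; each library configures its own.
resolver_base = "https://resolver.example.edu/openurl"

# Citation data expressed with Z39.88 key/value pairs for a journal article.
citation = {
    "url_ver": "Z39.88-2004",
    "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
    "rft.atitle": "Technical communicator: A new model for the electronic resources librarian?",
    "rft.jtitle": "Journal of Electronic Resources Librarianship",
    "rft.volume": "28",
    "rft.issue": "2",
    "rft.spage": "84",
    "rft.date": "2016",
}

# The source system (e.g., a discovery layer) appends the citation to the
# resolver's base URL; the resolver consults its knowledge base and points
# the user to an appropriate full-text copy for that library's subscriptions.
openurl = resolver_base + "?" + urlencode(citation)
print(openurl)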

Framing

The print-only era of libraries was difficult enough for many reasons, but managing and using print resources was a comparatively linear process. Electronic resources have raised the stakes. That might be expected: we have had some 500 years to develop and refine print technology, yet only a little more than three decades of experience with digital technology. We are a long way off from stability, and many of the challenges and frustrations ahead of us are not simply technical but also social and legal.

As you may surmise from the examples above, electronic resources are a major part of any library, whether academic, public, school, or special. Managing them with efficient work flows requires attending to many parts of a big system. That is much of what we will discuss and learn about, especially because these systems are complex and have had a major impact on librarianship itself.

Our Readings: The nature of ERM librarianship

Our readings this week provide introductions to electronic resource librarianship and help frame this course.

A Specialist and A Generalist

The first article, by Stachokas (2018), surveys the history of this specialist/generalist librarian role. Stachokas (2018) finds that the electronic resource librarian has their feet planted in both technical services and collection development, that this requires a holistic understanding of the electronic resource work flow and of its embeddedness in the scholarly and library ecosystem, and that this dual footing is leading to different areas of specialization: those who focus on "licensing, acquisitions, and collection development" (p. 15), and those who focus on "metadata, discovery, management of knowledge bases, and addressing technical problems" (p. 15). This rings true to me. In my own observations, I've noticed that job announcements increasingly stress one of these areas and not both.

The Technical Communicator as the "Bridge"

In the second article, Hulseberg (2016) uses the field of technical communication (TC) to interpret the field of electronic resource librarianship. Hulseberg (2016) takes the view that an electronic resource librarian is, among other things, a technical communicator. This is much different from being someone who helps patrons with their technical problems. Rather, this is someone who completes advanced work in documenting and reporting technical processes.

Hulseberg (2016) highlights four important themes about ERM; the most interesting to me are Theme One, metaphors of "bridge" and "translator," and Theme Two, collaborating in a web of relationships. When I was an undergraduate, I imagined that the job I wanted would be one that connected people from different silos and helped them communicate. It turns out that, under Hulseberg's (2016) view, electronic resource librarianship does this work, which is pretty cool. However, the other themes are just as important; in particular, Theme Four, about jurisdiction, highlights one of the major disruptions to librarianship in the last thirty or forty years.

As an example, consider that most people, researchers and scholars included, use resources not provided by the library to locate information. Additionally, more works, scholarly and non-scholarly, are freely and publicly available on the web, e.g., as open access (OA). This might mean that the library is becoming disintermediated as people use non-library services, like Google or Google Scholar, to retrieve freely available works on the web rather than going through the library. As a result, what becomes of the core jurisdiction of the librarian, and of the electronic resource librarian in particular? In concrete terms: a recent paper (Klitzing, Hoekstra, & Strijbos, 2019) reported that researchers say they use Google Scholar 83% of the time and EBSCOhost 29% of the time to find relevant material. That raises strategic and technical questions about the role of the librarian and the library in today's scholarly communication system.

To License or Not To License

The third article, by Zhu (2016), places a different theoretical lens on what it means to be an electronic resource librarian. Zhu (2016) posits that the licensing aspect of electronic resource management significantly influences the identity of ER librarianship. One reason Zhu's (2016) findings are insightful is that libraries often license electronic resources rather than buy them.

The crux centers on copyright law, which provides librarians with an important legal justification for lending works: the First Sale Doctrine. Copyright law grants copyright owners the right to distribute their work, but the First Sale Doctrine holds that if you buy a copy of a book, then you have the right to lend or sell that copy. This doctrine is fundamental to librarianship, but it also raises problems, since most digital works (ebooks, etc.) are licensed rather than bought by libraries, and thus the First Sale Doctrine does not apply to them. The ALA has a guide on this issue (ALA, 2022).

Stachokas (2018), Hulseberg (2016), and Zhu (2016) present the historical and environmental forces that have shaped the work of electronic resource librarians and their professional identities, and each of these authors discusses important themes that function as evidence of these identities. In our discussions this week, we should focus on these themes and how to make sense of them.

Conclusion

Electronic resource librarianship is a fascinating area of librarianship. Digital technologies have woven their way into all parts of the modern library ecosystem, and they bring with them a slew of technical, legal, and social challenges. As these technologies have developed, electronic resource librarians have had to maintain a holistic view of this ecosystem even as they have specialized in key areas, areas that still require that holistic, interconnected view.

The course takes that holistic view and divides it into four parts for study. In the first part, we study the nature of the work itself: what it means to be an electronic resource librarian.

In the second part, we learn about the technologies that an electronic resource librarian uses and the conditions that shape these technologies. We will learn about integrated library systems (ILS) and how these systems conform (or not) to standards, and how they foster or obstruct interoperability and access.

In the third part, we focus on processes and their contexts. We will study the electronic resource librarian's workflow, the economics and the markets of electronic resources, what is involved in licensing these resources and negotiating with vendors.

At the end, we focus on patrons and end users; that is, those we serve. Because electronic resources are digital, when we use them, we leave behind traces of that usage. This means we will study how that usage is measured and what those measurements can validly say. Because usage leaves traces of personal information, we will examine topics related to the security of these resources and the privacy of those who use them. Working with electronic resources likewise means using websites and other e-resource interfaces, and hence we will study how electronic resource librarians are involved in user experience and usability studies.

Discussion Questions

As we start to address all of this, I want us to consider two questions:

  1. How do we manage all of this electronic stuff? Not only does it involve complicated technology and have an impact on our patrons, but it also involves many different sorts of librarians.
  2. What exactly is an electronic resource librarian? I like this basic question because, thanks in part to representations in the media (movies, TV shows, books) and the interactions we've had with librarians in our lifetimes, we all have pretty well-defined, if not always accurate, images of what reference or cataloging librarians are. But what about an electronic resource librarian? This is something different, right? And it's not a position that's likely ever to be captured and presented publicly.

Readings / References

Hulseberg, A. (2016). Technical communicator: A new model for the electronic resources librarian? Journal of Electronic Resources Librarianship, 28(2), 84–92. https://doi.org/10.1080/1941126X.2016.1164555

Stachokas, G. (2018). The Electronic Resources Librarian: From Public Service Generalist to Technical Services Specialist. Technical Services Quarterly, 35(1), 1–27. https://doi.org/10.1080/07317131.2017.1385286

Zhu, X. (2016). Driven adaptation: A grounded theory study of licensing electronic resources. Library & Information Science Research, 38(1), 69–80. https://doi.org/10.1016/j.lisr.2016.02.002

Optional Readings / Additional References

ALA. (2022, June 27). LibGuides: Copyright for Libraries: First Sale Doctrine. https://libguides.ala.org/copyright/firstsale

Breeding, M. (2006). OPAC sustenance: Ex Libris to serve up Primo. Smart Libraries Newsletter, 26(03), 1. https://librarytechnology.org/document/11856

Klitzing, N., Hoekstra, R., & Strijbos, J-W. (2019). Literature practices: Processes leading up to a citation. Journal of Documentation, 75(1). https://doi.org/10.1108/JD-03-2018-0047

Wikoff, K. (2011). Electronic Resources Management in the Academic Library: A Professional Guide. ABC-CLIO. http://www.worldcat.org/oclc/940697515

Desperately Seeking an ERM Librarian

Introduction

In this section, I will:

  • frame the readings,
  • discuss the readings, and
  • list some questions to guide our discussion for the week.

Our goal in this section is to understand how the job of electronic resource management (ERM) has changed throughout the years and to develop some ideas about where it is headed.

Framing

This week we read two articles (Hartnett, 2014; Murdock, 2010) that analyze electronic resource librarian job advertisements. Additionally, the reading list includes the NASIG core competencies for electronic resource librarianship. I suggest that you review the list of core competencies before you read the articles.

These articles are of interest because they capture a description of electronic resource librarianship in earlier years. Many social, political, and economic conditions have stayed about the same since these articles were published, and these conditions help anchor the work of the ERM librarian, but constant changes in technologies and types of electronic resources have meant that electronic resource librarians are constantly adapting to new work flows. Our Murdock (2010) reading makes this point.

Though the technology differs, the descriptions and duties of ER jobs are still on the mark in many ways. I posted links to job announcements in the discussion forum to demonstrate this. Most of those job announcements were emailed to the SERIALST email list (please subscribe to it) and are not current job openings, but they were current within the last few years.

To follow up on more current advertisements for ER librarians, I did a Google search on August 15, 2022 using the following query:

"electronic resource librarian" job

Results are consistent with past qualifications listed in the links mentioned above and with the last section's discussions on the nature of ERM librarianship. I will withhold linking to these advertisements since I don't expect the links to persist as the positions get filled. But several sources outline the following requested qualifications, and I think the themes from the prior section come through in this list:

  • provide "consistent and reliable access to the Library's electronic resources and establishing workflows that maintain discovery and use of all Library collections"
  • analyze "feasibility of technical platforms"
  • "resource licensing and contracting"
  • "copyright compliance"
  • "vendor negotiations"
  • "Acts as a bridge across multiple Library units"
  • enhance "access and use"
  • design a "system of access"
  • analyze "staff and user issues with discovery of resources"
  • coordinate "system administration responsibilities for integrated library system (Ex Libris Alma/Primo VE) with Technical Services Librarian"
  • "create reports in Alma and individual databases as needed, including but not limited to usage statistics, user experience statistics, collection analysis and overlap"
  • monitor "listservs for Alma and Primo VE"
  • manage "administrative and troubleshooting functions for EZ Proxy and e-resource vendors for access and authentication"
  • evaluate "the scope and quality of research resources available"
  • "reviews and negotiates licenses for [...] purchased resources and manages the acquisition, activation, and troubleshooting of all purchased and subscribed electronic resources for the Library."
  • gathers and analyzes "serials, e-books, database usage, and other related assessment data"
  • oversees "the activities of the serials acquisitions unit"
  • maintain "responsibility for licensing and the management of electronic information resources [...] as well as the shared consortial resources"
  • assist in "planning and developing policies and workflows"
  • partner "with the Acquisitions Librarian and staff on the ordering and payment activity of electronic resources"
  • establish and "documents library procedures and best practices for the acquisition, licensing, implementation, assessment, and budgeting of electronic resources"
  • work "with colleagues [...] to optimize resource discovery"
  • work "with vendors to resolve technical issues and manages EZproxy for remote access"
  • "collects, analyzes, and presents use, purchase, and availability data of electronic resources"
  • "works collaboratively to support metadata maintenance for electronic resources and both print and digital serials"
  • develop and implement "submissions to a shared University institutional repository"
  • works "under the supervision of the Associate Director for Technical Services"

To stay current about the position overall and about the specific duties involved, I encourage you to keep abreast of the relevant literature. In addition to the two journals used in this section's readings and the SERIALST list I have asked you to subscribe to, I recommend bookmarking titles like the Journal of Electronic Resources Librarianship, Against the Grain, and the Journal of Electronic Resources in Medical Libraries.

Our Readings

Now let's start with Murdock's (2010) overview of the electronic resource librarian's position in the first decade of this century, and proceed to Hartnett's (2014) work that describes the electronic resource librarian's duties around ten years ago. NASIG's core competencies were also published around ten years ago and have received only minor revisions since then. We conclude by considering the current job advertisements listed above and elsewhere. With these, our goal in this section is to get a sense of the electronic resource librarian's job duties and available technologies from the earlier part of this century to now; to understand where the position was, how it has evolved, and where it might be headed. Overall, you will get a sense of just how much this position has changed in the intervening years, what I refer to as constant disruption in the next section.

One of the useful aspects of Murdock's (2010) article is section 4.3, which lists a 'timeline of commercial ERM developments and standards.' These technologies and standards continue today and have, in hindsight, strongly shaped how electronic resources currently work. The timeline includes what is now referred to as the A-Z list of serials, which started in 2000; OpenURL linking technology from 2001; the combining of integrated library systems (ILS) and electronic resource management systems (ERMS) in 2004; federated searching from 2005; the SUSHI usage statistics protocol from 2006; and SERU from 2007. We will explore each of these topics in future sections.

Murdock's (2010) analysis shows that some duties started to wane in the early years. Website maintenance and deployment (see Fig 13), for example, completely dropped off the radar. This was likely due to the development of content management systems, i.e., turnkey website solutions, that are still in use today. For example, I remember Joomla and Drupal, both content management systems that can work as library websites, taking off around 2007 to 2009, around the time that Murdock shows this area of activity declining.

One of Murdock's (2010) conclusions is that employers sought "to hire those who are able to perform traditional librarian duties, such as reference and instructional service, in addition to e-resource specific tasks" (p. 37). Based on what we see in job advertisements today, this seems to be much less the case, as electronic resource librarians have become more specialized, as the position has divided into two areas (a technical services aspect and a collection development aspect), and as electronic resources have grown and become more dominant in the intervening years. ER librarians still liaise with their communities, but in different ways (i.e., not as reference librarians).

A big change since 2012, when Hartnett's (2014) research ends, is that more technology has moved to the cloud. This means we rely less on onsite servers managed by library IT (or other IT). Switching from local IT infrastructure to cloud-based IT infrastructure requires different types of technological skills and suggests that conceptual knowledge is paramount. (This is my opinion, but it's based on years of doing and teaching systems administration work.) For example, it's more important to have a conceptual understanding of how metadata works than to know how to use some specific piece of software for managing metadata (aside: conceptual and practical knowledge cannot be so easily divorced from each other, however). Metadata standards and schemes change far less often than the software used to enter or administer metadata.
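
To illustrate what I mean by conceptual knowledge, here is a small sketch, in Python, that expresses an invented bibliographic description with Dublin Core element names. The point is only that the scheme, that is, the element names and their meanings, stays stable even as the software used to enter or store such records changes.

# An invented record expressed with Dublin Core element names.
record = {
    "dc:title": "Electronic Resource Management in Libraries",
    "dc:creator": "Burns, C. Sean",
    "dc:date": "2022",
    "dc:type": "Text",
    "dc:format": "text/html",
}

# Whether this record lives in an ILS, an LSP, or a plain spreadsheet, the
# conceptual questions are the same: which element describes what, and how
# consistently is each element applied across the collection?
for element, value in record.items():
    print(f"{element}: {value}")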

Cloud-based solutions have changed the field in other ways. Hosted software is leased rather than purchased, and leasing involves outside vendors who must have the technological skill sets and resources to manage and provide the technology. So we rely, in very important ways, on other actors in the publishing and e-resource provider ecosystem and are more dependent on them. That involves a lot of trust, more negotiation between librarians and vendors, and the ability to collaborate and develop good, ethical relationships. As more software and data are hosted in the cloud, the work that may increase is the kind that requires strong communication skills, strong negotiating skills, and knowledge of how licensing works, how copyright and contract law work, and how electronic resource collections work and interoperate across platforms. However, again, this can be complicated. A couple of years ago, a librarian asked on the SERIALST listserv whether librarians have retained the power to negotiate and sign contracts. The responses were mixed: some librarians had that jurisdiction, but many had lost it. Even under this scenario, though, technological understanding is necessary in order to negotiate the best deal for a library's stakeholders and to acquire the best and most seamless product for library users.

Even though IT continues to be outsourced, this doesn't mean that we can become lax in our understanding of how the technology works, just as we can't become lax in our understanding of how librarianship works even though librarians answer fewer reference questions than they did in previous years (i.e., you still need to know how to respond systematically and thoroughly to research and reference questions). What I mean is that, in order to communicate well, negotiate well, and sign licenses that benefit our communities, it helps to understand and be adept with the technology so that we are not bamboozled in those negotiations. Also, if something goes wrong, e.g., with the link resolver, we have to be able to identify the source of the problem; that is, whether the issue lies with the link resolver and not with something else, like the OPAC.

Conclusion

I want you to think about these job advertisement studies in relation to what you are learning about electronic resources as well as in relation to the kinds of advertisements you've seen since whenever you started paying attention to them, like those outlined at the beginning of this section. In essence, think about where you see yourself in these advertisements and how they impact you.

Although most of you may not have had a chance to learn electronic resource back ends, that doesn't disqualify you from beginning to reflect on this part of librarianship. As users of these technologies, you have already gathered enough abstract and practical knowledge to get started.

Questions for Discussion

We'll soon move away from reflective questions and get our hands dirty with specific technologies, licensing, etc., but for now let's reflect on the following questions:

  • Where do you think you stand in comparison to ERM job ads and to NASIG Core Competencies?
    • Where are you strong?
    • Where would you like to improve?
  • What can you (and we, as a course community) do to help each of you get there?
    • What's your path?
    • What can you practice?

Readings / References

Hartnett, E. (2014). NASIG’s Core Competencies for Electronic Resources Librarians Revisited: An Analysis of Job Advertisement Trends, 2000–2012. The Journal of Academic Librarianship, 40(3), 247–258. https://doi.org/10.1016/j.acalib.2014.03.013

Murdock, D. (2010). Relevance of electronic resource management systems to hiring practices for electronic resources personnel. Library Collections, Acquisitions, and Technical Services, 34(1), 25–42. https://doi.org/10.1016/j.lcats.2009.11.001

NASIG Core Competencies Task Force. (2021, April 5). NASIG Core Competencies for E-Resources Librarians. https://nasig.org/Competencies-Eresources

Constant Disruption

Introduction

I think we might conclude at this point that there are many different ways to frame the role of the electronic resource librarian. In The ERM Librarian section, Stachokas (2018) showed how the electronic resource librarian works across technical services and collection development and how this requires a holistic view as well as a specialized understanding of the various processes involved. Hulseberg (2016) illustrated how framing the electronic resource librarian as a technical communicator yields important insights into the work and the profession. Zhu (2016) used the licensing aspect of electronic resource management work to show how central this activity is to the field's identity.

In the Desperately Seeking an ERM Librarian section, we reviewed a list of qualifications from current job advertisements for electronic resource librarian positions. In Murdock (2010), we learned how these kinds of qualifications are tied to various technological developments. As the technology changes, and it changes a lot (see the aside below), so do the qualifications. One of the big takeaways from Hartnett's (2014) article, for me at least, is that conceptual knowledge of the relevant technology is more important than practical knowledge, although the two are not always so easily divorced from each other.

Aside: See The Library Technology Guides' page on The History of Mergers and Acquisitions in the Library Technology Industry to get a sense of how much change has taken place in the last 40+ years.

Based on the readings so far, I think it's safe to say that the work of electronic resource librarianship is one of constant disruption. By that I mean it might be more difficult to find a thread of continuity in this role, from its early days to now, than in other areas of librarianship. That might be part of what makes this area so interesting, but it does present some challenges.

Framing

The first listed reading this week is by Marshall Breeding. Astute observers will note that Breeding is among the first authors cited in our two additional readings this week. We will read more from Breeding later, but I bring him up now because he oversees a website titled Library Technology Guides. If you would like to keep abreast of recent news on the electronic resource industry, Breeding's website should be at the top of your list.

Breeding's What is ERM? article is a good one to start the week. He provides an outline of the various components of electronic resources, and he provides some historical context for those components.

The Focus on Academic Libraries is Misleading

One caveat, though: while all the articles on our list this week focus on academic libraries, the terms, concepts, and processes they describe are relevant to other areas of librarianship, such as public librarianship, school media librarianship, and so forth. Differences in processes and in some details will arise due to organizational or other contextual differences among these library types. Organizationally, for example, public libraries are connected to municipalities, county governments, state libraries, and public library consortia. This presents unique organizational challenges, and it highlights the fact that municipal, county, or state laws will define how some processes must be handled and who must handle them. The same is true for public schools, where school boards and school districts will likely be involved (see aside below).

Contextual differences that shape electronic resource management include user communities. An academic library serves its students, faculty, staff, and perhaps the public to some degree, whereas a public library serves its local community. Such differences will shape ERM workflows and other choices, like the vendors and publishers chosen. For example, academic libraries provide more scholarly sources, and public libraries provide more ebooks and audiobooks and fewer e-journals. This means that you might find services like OverDrive/Libby, Epic, and Hoopla offered by public libraries but not academic ones, and that you will rarely find more advanced scholarly resources in public libraries. These differences in needs, arising from different communities, will change the emphasis on some aspects of the ERM work flow.

Aside: As an example, see the impact that the current book bans are having on ebook providers, and note all the parties involved in these situations, including superintendents, county officials, county school systems, school district employees, etc. (see Ingram, 2022, NBC News).

Another reason our reading lists lean toward the academic setting is not that electronic resource management is irrelevant to public or other types of libraries, but that academic librarians publish much more about electronic resources than public librarians do. For example, I conducted a search in LISTA with the following query, which returned just three results in 2021, when I first ran it, and four results in August 2022, when I ran it again. In fact, no new article had been published between the two queries; LISTA had simply added an additional item from 2015.

(DE "ELECTRONIC information resources management")  AND  (DE "PUBLIC libraries")

It does not get much better if I expand the query to include some additional thesaurus terms. This query returns only six results in 2021 and seven in 2022:

(DE "PUBLIC librarians" OR DE "PUBLIC librarianship" OR DE "PUBLIC libraries") AND  (DE "ELECTRONIC resource librarians" OR  DE "ELECTRONIC information resources management")

However, if I focus that query on academic libraries, then the results increase substantially. The following query returns 51 hits in 2021 and 57 in 2022:

(DE "ELECTRONIC information resources management")  AND  (DE "ACADEMIC libraries")

And this query returns 82 results in 2021 and 90 in 2022:

(DE "ACADEMIC librarians" OR DE "ACADEMIC librarianship" OR DE "ACADEMIC libraries") AND  (DE "ELECTRONIC resource librarians" OR  DE "ELECTRONIC information resources management")

We could continue to explore LISTA or other databases for relevant material on ERM and public libraries, and we would find more. For example, more results are retrieved when I attach terms like e-resources, integrated library systems, discovery platforms, or ebooks to a public library search in LISTA. But the results are nearly always far fewer than those for academic library searches. As another example, my local library system, the Lexington Public Library, uses the CARL integrated library system, but there do not appear to be any articles in LISTA about it since 2015.

Anyway, you get the picture. There simply isn't a lot of material on electronic resource management from the public library perspective, and a quick search in the school media sphere mirrors this issue, too. The following search returned zero results in both 2021 and 2022:

(DE "LIBRARY media specialists")  AND  (DE "ELECTRONIC information resources management")

If you go into public librarianship or school media librarianship, I'd encourage you to publish on electronic resources. It would greatly benefit your peers and those of us who teach courses like this.

Back to Breeding. I understand that some of the terms Breeding uses and the technologies he describes might still be new to us. Let me spend some time providing additional background information and highlighting some things to look for in these three articles.

Librarians started to migrate to electronic resource management in the 1970s. Breeding mentions this. This is when it happened en masse, but the seeds were planted well before this. I published a paper a few years ago that provides a historical account of the first library automation project, which took place in the 1930s with Hollerith punched cards. By the 1960s, the primary use of computers was to manage circulation, and in the late 60s and early 70s, library automation focused on managing patron records. In the early 1970s, tools became available to manage and search bibliographic records. Hence, computers were first used mostly to manage the circulation of books, then patron records, which allowed patrons to check out works electronically, and then we had the ability to search for works. If a work of interest was located using these tools, the work could be retrieved from the shelves or ordered via interlibrary loan by snail mail. Full text search came much later with the introduction of better storage media, like CD-ROMs, and saw major growth with the introduction of the internet to more institutions in the 1980s and the web in the early 1990s, which at its heart is nothing more than a big document retrieval system.

In the process of migrating from print to electronic, all sorts of things had to change, but all that change rests on the major premise of librarianship: to provide access by organizing information in order to retrieve information. Although you may have often heard that libraries are ancient entities, libraries and librarianship as we understand them today did not begin to modernize until the late 1800s, and more so starting in the 1920s and 1930s. It was then that some in the profession began to hone in on the major complexities and challenges, social and technological, involved in organizing and retrieving information in order to provide access. The challenges with organizing and retrieving information that they identified nearly 100 years ago were indeed major and problematic, but also fortuitous, because they gave rise to what we now call library science, the rigorous study of libraries, librarianship, collections, users, communities, and so forth.

Yet consider that when those people laid the groundwork for a library science nearly 100 years ago, librarians managed only print, and the primary means of accessing print collections was a physical building. With the introduction of computer systems in the 1960s and of better networking technologies in the 1980s and 1990s, the issues with organizing and retrieving information grew exponentially, and indeed, this exponential increase created new complexities and launched an entirely new field, what we now call information science.

All right, back to the ground level. Let me highlight some key terms in Breeding's article. They include:

  • Finding aids
  • Knowledge bases
  • OpenURL link resolvers
  • ERM systems
  • Library service platforms (LSP)
  • Integrated library systems (ILS)

Unless you already have some solid experience with these things, and even after reading Breeding's article, these terms may still be abstract. So, this week, you have two major tasks:

  1. Find real, practical examples. Pick one or two of the above terms and see how they work in practice. Then come back here and tell us what you found. Use the articles to help you locate actual products or examples.
  2. Locate how these terms appear in either the Cote and Ostergaard (2017) article or the Fu and Carmen (2015) article. Note other terms that appear in those two articles and comment on the role they play in the ERM work flow and the migration process.

Work flow

As you work on this task, I ask that you pay attention to the emphasis on work flow. The idea of a work flow is a major theme in this course because it is a major part of electronic resource management. We'll come back to the idea over and over this semester. Also, as we read the Cote and Ostergaard (2017) and Fu and Carmen (2015) articles, we will learn how migrating to new systems is a major, expensive, and time-consuming project, and one of the great things about these two articles is that they document the work flows used in these migrations. If you become involved someday in a migration process, you should use articles like these to assist you and to help you make evidence-based decisions about what you need to accomplish. And, like these authors, I'd encourage you to document and publish what you learn. In the print era, there were some cases where librarians had to migrate to new systems, too. For example, some research libraries in the U.S. started by classifying collections using the Dewey Decimal Classification system but then began to convert to the Library of Congress Classification system after the mid-20th century. This was no small task. Today, migration is big business in the library world (see Breeding's site for other examples) because the technology changes fast and because there are a number of competing electronic resource management products that librarians can choose for their communities, which they might be inclined to do if the migration provides an advantage to their users, their communities, and themselves.

Readings / References

Breeding, M. (2018). What is ERM? Electronic resource management strategies in academic libraries. Computers in Libraries, 38(3). Retrieved from https://www.infotoday.com/cilmag/apr18/Breeding--What-is-ERM.shtml

Cote, C., & Ostergaard, K. (2017). Master of “Complex and Ambiguous Phenomena”: The ERL’s Role in a Library Service Platform Migration. Serials Librarian, 72(1–4), 223–229. https://doi.org/10.1080/0361526X.2017.1285128

Fu, P., & Carmen, J. (2015). Migration to Alma/Primo: A Case Study of Central Washington University. Chinese Librarianship: an International Electronic Journal. https://digitalcommons.cwu.edu/libraryfac/30/

Additional References

Burns, C. S. (2014). Academic libraries and automation: A historical reflection on Ralph Halstead Parker. Portal: Libraries and the Academy, 14(1), 87–102. https://doi.org/10.1353/pla.2013.0051

Ingram, D. (2022, May 12). Some parents now want e-reader apps banned - and they're getting results. NBC News. https://www.nbcnews.com/tech/tech-news/library-apps-book-ban-schools-conservative-parents-rcna26103

Chapter Two: Technologies and Standards

Chapter 2 is completed (9/21/2022).

ERM and ILS

Introduction

This week we learn about ERM and ILS software. What are these?

ILS, the Integrated Library System

ILS is an acronym for integrated library system. We were introduced to the newer term library services platform (LSP) in the previous section. Although the two differ in many ways (Breeding, 2015; Breeding, 2020), our discussions of the ILS and LSP this week are relevant to those types of integrated library systems that may also be library service platforms, the latter of which are becoming more common these days.

The differences between the ILS and the LSP are both large and small. The main idea behind them is the same in the sense that they are both "used by librarians to manage their internal work and external services," such as "acquiring and describing collection resources, making those resources available to their users through appropriate channels, and other areas of their [resource management] operations" (Breeding, 2020, para. 1).

Administration

In order to provide the above services, the ILS/LSP has an administrative interface that librarians use to manage their resources. The interface contains a set of modules that are common among most software solutions, although they may be named differently. These modules typically cover:

  • circulation
  • cataloging
  • acquisitions
  • serials
  • reports and statistics
  • system administration
  • the public catalog or discovery interface

This list follows the modules documented by the Evergreen open source ILS system. LibLime's Bibliovation LSP offers comparably named modules for discovery, circulation, cataloging, serials, acquisitions, and systems administration. Other ILS/LSP solutions may offer specific modules dedicated to other items in the list, or those functions might be integrated into one of the above modules. Alternatively, new modules appear in LSPs that take advantage of special LSP abilities and digital assets. For example, the Alma LSP provides modules dedicated to acquisitions, resources, discovery (via Primo), fulfillment, administration, and analytics. Please take a moment to read about these modules in Evergreen's documentation and to visit the Alma and LibLime sites to learn more about their LSP products specifically.

User Interface

Each of you is familiar with an ILS/LSP from a user perspective, and some of you are familiar with these systems from a librarian perspective. In your lifetimes, you have used OPACs (online public access catalogs) or discovery systems like InfoKat, which uses Alma's Primo discovery system; you have likely conducted a search for a serial; and you have most definitely borrowed a book from a library. The ILS/LSP makes these end user functions possible.

Until fairly recently, the OPAC was the primary way to locate and access items in library collections. However, in many ILS/LSPs the OPAC has evolved into a discovery system, depending on what it searches, how it searches its records, and other factors. The Encyclopedia of Knowledge Organization describes the difference as follows:

OPACs replicated and extended the functionality of the card catalogues they largely replaced in providing a finding aid to the books, journals, audiovisual material and other holdings of a particular library. The term discovery system has come into use in the early Twenty-first century to describe public-facing electronic catalogues which use the technology of the Internet search engines to expand the scope of the OPAC to include not only library-held content, including entries for journal articles and book chapters that were not typically part of traditional library catalogues, but also material held elsewhere which may be of interest to clients (Wells, 2021).

In other words, OPACs generally searched against pre-defined fields recorded in MARC, such as author, title, and subject, and searched the collections, at first print but later also electronic, held by the library. A discovery system can search additional text, if available, and can more easily link to items not in the library collection but which can be acquired through interlibrary loan. A discovery system can also integrate with bibliographic databases and return results indexed by those databases. This saves the user from having to know about specific topical databases. For example, UK Libraries provides access to over 700 databases, and thus having a discovery system that can access those is beneficial. However, none of the above means that a discovery system, like InfoKat, is aware of the totality of a library's collections. (And it's not always clear what's left out.)
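
As a rough illustration of that difference, and not a description of any particular product, the following sketch contrasts a fielded, OPAC-style search with a discovery-style keyword search over a broader body of text. The sample record and its fields are invented for the example.

# A toy illustration: fielded (OPAC-style) search vs. keyword (discovery-style) search.
records = [
    {
        "title": "Library automation systems",
        "author": "Salmon, Stephen R.",
        "subject": ["Libraries -- Automation"],
        # A discovery index may also hold text that the catalog record never had,
        # such as article-level metadata or full text from a central index.
        "extra_text": "chapters on acquisitions, cataloging, and circulation routines",
    },
]

def opac_search(field, query):
    """OPAC-style: match only against a pre-defined field (author, title, subject)."""
    return [r for r in records if query.lower() in str(r.get(field, "")).lower()]

def discovery_search(query):
    """Discovery-style: match against all available text for the record."""
    return [r for r in records
            if query.lower() in " ".join(str(v) for v in r.values()).lower()]

print(opac_search("title", "circulation"))  # [] because the word is not in the title field
print(discovery_search("circulation"))      # finds the record through the extra text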

In Totality

These administrative and end user interfaces make up the totality of the ILS/LSP software. In short:

  • an administrative interface is used by librarians to manage tasks provided through modules.
  • a public interface, such as an OPAC or discovery system, is used by librarians and patrons to access the library's collections.

An ILS/LSP is therefore, as Stephen Salmon stated in 1975, a non-traditional way of doing traditional things, such as "acquisitions, cataloging, and circulation," though by now that non-traditional way has itself become fairly traditional!

Electronic resource librarians might work extensively with a resources module (or its equivalent) in order to administer the library's digital assets (e.g., contracts), but all librarians will use one or more of the ILS/LSP modules. For example, when I worked in reference at a small academic library, I used the Millennium ILS to check out books to users, to fix borrowing issues, and to search for works in the OPAC. Later, when I moved to technical services, I primarily used the cataloging module. What a librarian uses frequently depends on the organizational structure of a library. As our reading by Miller, Sharp, and Jones (2014) shows, the rise of electronic resources has vastly influenced the ways librarians structure their organizations, structures that were originally informed by the dictates of a "print-based world".

To learn more about the ILS and the current iterations that we now call LSPs, see the links in the text above.

ERM, the Electronic Resource Management System

ERM is an acronym for electronic resource management system. Its function is born from the need to manage a library's digital assets, for example, the licenses that a library has signed. In order to manage assets like licenses, the ERM can keep track of the signatories, the terms of the licenses/contracts, specific documents related to these processes, and more. An ERM may or may not be integrated with a library's ILS software, but it is most likely part of an LSP solution. Alma, for example, is an LSP that also provides electronic resource management.
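
To picture the kind of record an ERM keeps, here is a minimal, hypothetical sketch of a license entry. The field names are illustrative and are not taken from any specific ERM product.

from dataclasses import dataclass, field

# A hypothetical license record of the kind an ERM might track.
@dataclass
class LicenseRecord:
    resource_name: str                 # e.g., an ejournal package or database
    licensor: str                      # the publisher or vendor
    signatories: list                  # who signed for the library and for the vendor
    start_date: str
    end_date: str
    terms: dict = field(default_factory=dict)      # e.g., ILL allowed? post-cancellation access?
    documents: list = field(default_factory=list)  # links or paths to the signed agreements

example = LicenseRecord(
    resource_name="Example Journal Package",
    licensor="Example Publisher",
    signatories=["Library Dean", "Publisher Sales Director"],
    start_date="2022-07-01",
    end_date="2023-06-30",
    terms={"interlibrary_loan": True, "post_cancellation_access": True},
    documents=["signed_license_2022.pdf"],
)
print(example.resource_name, example.end_date)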

Like the ILS/LSP, ERM software is generally divided into modules that focus the librarian's work on particular duties and allow librarians to create work flows and knowledge management systems. In an ERM like the open source CORAL system, the modules include:

  • Resources: a module "provides a robust database for tracking data related to your organization's resources ..." and "provides a customizable workflow tool that can be used to track, assign, and complete workflow tasks."
  • Licensing: a module for a "flexible document management system" that provides options to manage licensing agreements and to automate parts of the process.
  • Organizations: this module acts as a type of advanced directory to manage the various organizations that impact or are involved in the management of electronic resources, including "publishers, vendors, consortia, and more."
  • Usage Statistics: a module providing librarians with usage statistics of digital assets by platform and by publisher. Supports COUNTER and SUSHI. We'll cover COUNTER and SUSHI later in the semester, but as a preamble:
    • COUNTER "sets and maintains the standard known as the Code of Practice and ensures that publishers and vendors submit annually to a rigorous independent audit", and,
    • SUSHI is a protocol that automates the collection of usage statistics (see the sketch after this list).
  • Management: this module provides a document management system aimed at "storing documents, such as policies, processes, and procedures, related to the overall management of electronic resources".
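
As a preview of how SUSHI automates that collection, here is a minimal sketch, in Python, of a COUNTER Release 5 SUSHI request for a title report. The base URL and the credentials are placeholders, since each platform publishes its own SUSHI address and IDs, so treat this as an illustration rather than a recipe for any particular vendor.

import requests

# Placeholder SUSHI endpoint and credentials; each platform publishes its own.
SUSHI_BASE = "https://sushi.example-platform.com/counter/r5"
params = {
    "customer_id": "YOUR_CUSTOMER_ID",
    "requestor_id": "YOUR_REQUESTOR_ID",
    "begin_date": "2022-01",
    "end_date": "2022-06",
}

# "/reports/tr" requests the COUNTER R5 Title Master Report as JSON.
response = requests.get(f"{SUSHI_BASE}/reports/tr", params=params, timeout=30)
response.raise_for_status()
report = response.json()

# Each Report_Item holds per-title usage that an ERM's statistics module would store.
for item in report.get("Report_Items", []):
    print(item.get("Title"))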

Readings

In our readings this week, we have three articles that speak to ILS/LSP and ERM software solutions and the relationship between the two.

As Fournie (2020) notes, the electronic resource market is consolidating into a few heavyweights, but this trend does not have to force libraries into solutions that lead to vendor lock-in or walled gardens. In the process, Fournie (2020) describes two ERM solutions: CORAL and FOLIO. The author's descriptions are helpful in understanding what these two software solutions are capable of providing.

The readings by Miller, Sharp, and Jones (2014) and Bahnmaier, Sherfey, and Hatfield (2020) provide some context about how these technologies impact librarianship. Miller et al. (2014) describe a case study (the literature review is also helpful) that shows how electronic resources have impacted organizational structure, job titles, budgets, and more. Likewise, Bahnmaier et al. (2020) discuss aspects of this and reflect on various changes in library staffing and how those changes raise the importance of the library-vendor relationship.

Conclusion

With that background in mind, I'll introduce you to several of these systems in this week's forum.

We will see what services and modules they provide, and how they function. Be sure to visit the links in this page, especially any documentation. I'll ask that you log into the relevant services, test the demo sites, or watch the demo videos. This will help you get some hands-on experience with them and also demystify what each does.

Addendum

In prior semesters, we read articles by Wang and Dawes (2012) and Wilson (2011). I replaced those readings for the Fall 2022 semester, but for those interested, I briefly describe them below.

In the article by Wang and Dawes (2012), the authors describe the "next generation integrated library system," which should meet a few criteria that include the ability to merge ILS software with ERM software. ERM software solutions exist because integrated library systems (ILS) failed to include functions to manage digital assets. Basically, the ILS was still behaving with a print mindset, so to speak, and was growing stagnant. Around the time the article was published, more ILS and ERM software began moving to the cloud, as was happening in many software markets. This changed the game because it placed a bigger burden on software companies to maintain their software. Based on demand and need, the LSP was created as a next-generation ILS that includes ERM functionality. So even though the LSP might replace the ILS/ERM combo someday, it is likely that for a while we'll live in a dual world where some libraries use an LSP and some use the ILS/ERM combo.

Despite the technical aspects of these solutions, at their most basic both ILS/LSP and ERM software focus on managing assets (books, serials, realia, etc.) so that librarians can organize those assets and so that users and librarians can retrieve them. There's no requirement to use a solution offered by a library vendor, and that's the point of the Wilson (2011) article, which shows how general-purpose software can function as a homegrown solution for creating and implementing an ERM work flow.

Readings / References

Bahnmaier, S., Sherfey, W., & Hatfield, M. (2020). Getting more bang for your buck: Working with your vendor in the age of the shrinking staff. The Serials Librarian, 78(1–4), 228–233. https://doi.org/10.1080/0361526X.2020.1717032

Fournie, J. (2020). Managing electronic resources without buying into the library vendor singularity. The Code4Lib Journal, 47. https://journal.code4lib.org/articles/14955

Miller, L. N., Sharp, D., & Jones, W. (2014). 70% and climbing: E-resources, books, and library restructuring. Collection Management, 39(2–3), 110–126. https://doi.org/10.1080/01462679.2014.901200

Optional Readings / Additional References

Anderson, E. K. (2014). Chapter 4: Electronic Resource Management Systems and Related Products. Library Technology Reports, 50(3), 30–42. https://journals.ala.org/index.php/ltr/article/view/4491

Breeding, M. (2015). Library Technology Reports, 51(4). Chapters 1-5. https://journals.ala.org/index.php/ltr/issue/view/509

Breeding, M. (2020). Smart libraries Q&A: Differences between ILS and LSP. Smart Libraries Newsletter, 40(10), 3–4. https://librarytechnology.org/document/25609

Hosburgh, N. (2016). Approaching discovery as part of a library service platform. In K. Varnum (Ed.), Exploring Discovery: The Front Door to your Library’s Licensed and Digitized Content. (pp. 15-25). Chicago, IL: ALA Editions. https://scholarship.rollins.edu/as_facpub/138/

Salmon, S. R. (1975). Library automation systems. New York: Marcel Dekker.

Wang, Y., & Dawes, T. A. (2012). The Next generation integrated library system: A promise fulfilled? Information Technology and Libraries, 31(3), 76–84. https://doi.org/10.6017/ital.v31i3.1914

Wells, D. (2021). Online public access catalogues and library discovery systems. In B. Hjørland & C. Gnoli (Eds.), Encyclopedia of Knowledge Organization (Vol. 48, pp. 457–466). https://www.isko.org/cyclo/opac

Wilson, K. (2011). Beyond library software: New tools for electronic resources management. Serials Review, 37(4), 294–304. https://doi.org/10.1080/00987913.2011.10765404

Standardizing Terms for Electronic Resource Management

Introduction

A while ago now, I conducted some historical research on a librarian named Ralph Parker. Inspired by technological advances in automation, specifically the use of punched cards and machines, Parker began to apply this technology to library circulation processes in the 1930s and thus became the first person to automate part of a library's work flow. By the mid-1960s, Parker's decades-long pursuit of library automation led to some major advances, including the founding of OCLC. Meanwhile, the punched card system he continued to develop eventually led to massive increases in circulation and better service to patrons. In the mid-60s he wrote the following about the installation and launch of a new punched card system to help automate circulation:

To the delight of the patrons it requires only four seconds to check out materials (as cited in Burns, 2014).

I think about that quote often. When I read it in his annual report in the archives at the University of Missouri, I could feel his giddiness with these results. Until this achievement, when a patron borrowed an item from the library, the process involved completing multiple forms in order to be sure that accurate records were kept. Accurate record keeping is important: libraries need to protect their collections but also provide access to them. As Flexner (1927) stated:

it is necessary that the library have control of these circulating books in several ways. It [the library] must know where they are, it must lay down rules to see that thoughtless people do not retain the books in their possession unfairly, and it must provide means for securing their prompt return. These and many other considerations combine to make it necessary for the [circulation] department to install and maintain very efficient methods to control the circulation of books, which are commonly known as routines (p. 6).

What were those routines in the 1930s and thereabouts? Why was Parker so excited about his system taking only four seconds to check out a work? Two routines are important for circulation: the first involves membership, and the second involves charging, or checking out, works.

Membership

First, if the patron was not yet a member of a library, then they had to register to become one; hence, the first routine was to check their membership and register them as a borrower if they were not yet a member or if their membership had expired. In a public library, the process might vary a bit depending on whether the member was an adult or a youth (a juvenile, in the lingo of the time). Regardless, this routine basically involved completing an application card, creating a member record and filing it away for the library's use, and then giving the borrower a card of their own, i.e., their borrower's card.

Charging

Once membership status was confirmed or created, then the circulation librarian employed a system to charge books to the borrower. Several systems had been employed up through the late 1920s, including the ledger system, the dummy system, the temporary slip system, the permanent slip or card system, the Browne system, and eventually the Newark charging system (see Flexner, 1927, pp. 73-82 for details). Assuming the librarian in the 1930s used the Newark system, when a book was to be checked out, the librarian needed to enter the details on a "book card, a date slip and a book pocket for each book" (Flexner, 1927, p. 78). Flexner goes on to outline the process:

The date slip is pasted opposite the pocket at the back of the book. The date which indicates when the book is due to be returned or when issued is stamped on each of three records, the reader's card, the book card and the date slip. The borrower's number is copied opposite the date on the book card. The date on the date slip indicates at once the file in which the book card is to be found, and the [librarian] assistant is able to discharge the book and release the borrower immediately on the return of the volume (Flexner, 1927, pp. 78-79).

In essence, charging books or works to patrons involved a lot of paperwork, and you can imagine that it might be prone to error. However, the number of systems at the time and the discussions and debates around them show that these processes and routines were steadily becoming standardized, and standardization is a necessary prerequisite to automation.

Parker's achievement in automation eventually improved the library experience for patrons as well as the librarians at the circulation desk, and indirectly their colleagues throughout the library. That is, once circulation standards stabilized, and once technology like punched cards became available generally, then it became possible to automate this process for the library. And this was good; the effects were that automation increased circulation and that an automated circulation process Saved The Time Of The Reader, down to four seconds to be exact!

This is all to say that standards and technology go hand in hand and that the details matter when thinking about standards. How does this relationship work? Standards enable multiple groups of competing interests to form consensus around how technology should work, and when this happens, multiple parties receive payoffs at the expense of any single party acquiring a monopoly. This is true for the design of screwdrivers, the width of railroad tracks, the temperature scale, and certainly also for how information is managed and exchanged. The internet and the web wouldn't exist, or at least not as we know them, if not for the standardization of the Internet Protocol (IP), the Transmission Control Protocol (TCP), the Hypertext Transfer Protocol (HTTP), and the other internet and web related technologies that enable the internet and the web to work for so many users regardless of the operating system and hardware they use.

Readings

Our first article this week by Harris (2006) covers the basic reasons for the existence of NISO (the National Information Standards Organization) and the kinds of standards NISO is responsible for maintaining and creating. These standards are directly related to libraries and fall under three broad categories: Information Creation & Curation, Information Discovery & Interchange, and Information Policy & Analysis. There are standards that touch on bibliographic information, indexing, abstracting, controlled vocabularies, and many other issues important to libraries. If you have not paid attention to NISO before, you might now start seeing more references to the organization and the standards it publishes, especially because the international library community has worked closely with NISO to develop standards for various aspects of library work.

Another historical note: as Harris (2006) elaborates in the article, NISO came into existence in the mid-1930s. This was about the same time that Ralph Parker began working on his punched card system. Not long before this, in the late 1920s, the first library science graduate program launched at the University of Chicago, and in the early 1930s, the first research-based journal in the field, The Library Quarterly, started. We often hear how long libraries have existed, and it's true that there were quite a few accomplishments before the 1930s, but it is this time period (for these and a number of other reasons) that marks the modern era of libraries.

We are not simply interested in standardizing things like the forms used to catalog and charge a book, to create member records, or to draw up licenses for electronic resources, as we'll discuss later. We are also interested in standardizing, as Flexner (1927) would say, "routines", processes, or workflows. Thus, our additional readings are on TERMS, or Techniques for Electronic Resource Management. TERMS is not a true standard, but more of a de facto or proposed standard that helps outline the electronic resource management work flow. It was developed so that librarians and others dealing with electronic resources could come to a consensus on the processes of electronic resource management. Version 1 of TERMS is described by the TERMS authors in an issue of Library Technology Reports. Although it has been replaced by a newer version, it still functions as a thorough introduction to the ERM work flow and provides guidance and suggestions on all aspects of electronic resource management. For example, in chapter 7 of the Library Technology Reports issue on TERMS version 1, the authors explain the importance of working with providers or vendors in case of cancellation of a resource. They write:

Do not burn any bridges! Many resources have postcancellation access, which means you need to keep up a working relationship with suppliers; this might also incur a platform access fee going forward, so this needs to be budgeted for in future years. Review the license to fully understand what your postcancellation rights to access may be. In addition, you may resubscribe to the resources in future years. Content is bought and sold by publishers and vendors. Therefore, you may end up back with your original vendor a year or two down the line!

Some of this material is repeated in version 2 of TERMS, but version 2 was created to address changes in the field and to include more input from the community. Version 2 also follows a slightly modified outline, which includes the following parts:

  1. Investigating new content for purchase or addition
  2. Acquiring new content
  3. Implementation
  4. Ongoing evaluation and access, and annual review
  5. Cancellation and replacement review
  6. Preservation

In the announcement for the new version (Emery & Stone, 2017), they also write:

In addition to the works mentioned or cited in the original TERMS report, much has been written in the past few years that can help the overwhelmed or incoming electronic resources librarian manage their daily workflow. In the end, however, most of the challenges facing the management of electronic resources is directly related to workflow management. How we manage these challenging or complex resources is more important than what we do, because how we do it informs how successful and how meaningful the work is, and how well it completes our goal of getting access to patrons who want to use these resources.

As such, the outline and the content described in these two versions of TERMS are very much centered on the ERM work flow. TERMS is a guide and framework for thinking about the different aspects of the electronic resource life-cycle within the library, and it helps provide librarians with a set of questions and points of investigation. For example, let's consider TERMS item 1, which is to investigate new content for purchase or addition. In a presentation, Emery and Stone (2014) suggest that this involves the following steps, partly paraphrased:

  • outline what you want to achieve
  • create a specification document
  • assemble the right team
  • review the market and literature and set up a trial
  • speak with suppliers and vendors
  • make a decision (Emery & Stone, 2014, slide 12)

Emery and Stone (2014) provide other examples, and the TERMS listed in this slide are from the first version. TERMS no. 6, preservation, was added in version 2, and TERMS nos. 4 and 5 from version 1 were joined together.

Exercise

This week you have a two-part exercise:

First

Visit the NISO website, search for documentation on a standard, a recommended practice, or a technical report, and post about it. The differences among these publication types are as follows:

Technical reports:

NISO Technical Reports provide useful information about a particular topic, but do not make specific recommendations about practices to follow. They are thus "descriptive" rather than "prescriptive" in nature. Proposed standards that do not result in consensus may get published as technical reports.

Recommended Practices:

NISO Recommended Practices are "best practices" or "guidelines" for methods, materials, or practices in order to give guidance to the user. These documents usually represent a leading edge, exceptional model, or proven industry practice. All elements of Recommended Practices are discretionary and may be used as stated or modified by the user to meet specific needs.

Published and Approved NISO Standards:

These are the final, approved definitions that have been achieved by a consensus of the community.

See https://www.niso.org/niso-io/2014/03/state-standards for the descriptions.

Second

After reading about TERMS, try to place them in a broader electronic resource management context. Draw from your experience using ILS and ERM software, from the readings, from your personal work experience in a library (if you have that), or use your imagination. Specifically, it would be interesting if you could pick out aspects of systems like Coral or Folio that appear to facilitate standardized workflows.

Sources for NISO Tasks

References

Emery, J., & Stone, G. (2017, March 17). Announcing TERMS ver2.0. TERMS: Techniques for electronic resource management. https://library.hud.ac.uk/archive/projects/terms/announcing-terms-ver2-0/

Harris, P. (2006). Library-vendor relations in the world of information standards. Journal of Library Administration, 44(3–4), 127–136. https://doi.org/10.1300/J111v44n03_11

Heaton, R. (2020). Evaluation for evolution: Using the ERMI standards to validate an Airtable ERMS. The Serials Librarian, 79(1–2), 177–191. https://doi.org/10.1080/0361526X.2020.1831680

Hosburgh, N. (2014). Managing the electronic resources lifecycle: Creating a comprehensive checklist using techniques for electronic resource management (TERMS). The Serials Librarian, 66(1–4), 212–219. https://doi.org/10.1080/0361526X.2014.880028

Optional Readings / Additional References

Burns, C. S. (2014). Academic libraries and automation: A historical reflection on Ralph Halstead Parker. Portal: Libraries and the Academy, 14(1), 87–102. https://doi.org/10.1353/pla.2013.0051, or: http://uknowledge.uky.edu/slis_facpub/6/

Breeding, M. (2015). Library Technology Reports, 51(4). Chapters 1-5. https://journals.ala.org/index.php/ltr/issue/view/509

Emery, J., & Stone, G. (2013). Library Technology Reports, 49(2). Chapters 1-8. https://journals.ala.org/index.php/ltr/issue/view/192

Emery, J., & Stone, G. (2014, July). Techniques for Electronic Resource Management (TERMS): From Coping to Best Practices [Conference]. 2014 AALL Annual Meeting and Conference, Henry B. Gonzalez Convention Center San Antonio, TX. http://eprints.hud.ac.uk/id/eprint/19420/

Flexner, J. M. (1927). Circulation Work in Public Libraries. American Library Association. https://hdl.handle.net/2027/mdp.39015027387052

Interoperability

Introduction

In this section we cover what it means for technology to be interoperable, using OpenURL link resolvers as our main example.

Problem

We take it for granted that, on the open web, we can seamlessly follow links to websites and webpages, or do so without too much fuss. It gets more complicated when we want access to works that are behind paywalls, regardless of where such works have been found: search engines, bibliographic databases, OPACs, or discovery services. In such cases, direct links to such sources will not always work.

The complication is that, when a library subscribes to a journal or a magazine, access to that journal or magazine is provided through various services and not necessarily through the publisher's default site. Also, libraries provide multiple discovery points and multiple ways to access the same works, such as through bibliographic databases with overlapping scopes. Bibliographic databases can tell us that an item exists when we search for it, but the library may not subscribe to the publication, or the item might be in the stacks, stored off site, or held at another library altogether. All of these problems, in conjunction with the paywall problem, which necessitates additional layers like proxy servers to authenticate library users, complicate access.

Let's consider an example. The journal The Serials Librarian is published by Taylor & Francis (Routledge) and has the following site as its homepage:

https://www.tandfonline.com/journals/wser20

The journal is indexed in EBSCOhost's Library, Information Science & Technology Abstracts (LISTA) database and in ProQuest's Social Science Premium Collection (SSPC) database, among other places (e.g., it can also be found in Google Scholar, Google Search, a library's discovery platform, and more). This means that an article like the following can show up based on a query on any of the above platforms, even if none of these search or discovery platforms provide full text access to the article:

Brown, D. (2021). "Through a glass, darkly": Lessons learned starting over as an electronic resources librarian. The Serials Librarian, 81(3–4), 246–252. https://doi.org/10.1080/0361526X.2021.2008581

One way to know whether our library provides access to the above source, and others like it, is through a link resolver. We see UK's link resolver in action whenever we see a View Now @ UK button or link. When we click on that button or link somewhere like LISTA or SSPC, we trigger the database's part of the link resolver process for that article, which routes us through the library's discovery service. In LISTA, that link looks like this:

https://web-p-ebscohost-com.ezproxy.uky.edu/ehost/SmartLink/OpenIlsLink?sid=9508afc3-4f38-4b9d-b680-71981313e0dd@redis&vid=5&sl=smartlink&st=ilslink_new&sv=sdbn%253Dlxh%2526pbt%253DAcademic%2520Journal%2526issn%253D0361526X%2526ttl%253DSerials%252520Librarian%2526stp%253DC%2526asi%253DY%2526ldc%253D%2526lna%253DAlma%252520Linking%2526lca%253DfullText%2526lo_an%253D156075536&su=https%3A%2F%2Fsaalck-uky.primo.exlibrisgroup.com%2Fopenurl%2F01SAA_UKY%2F01SAA_UKY%3AUKY%3FID%3Ddoi%3A10.1080%252F0361526X.2021.2008581%26genre%3Darticle%26atitle%3D%2522Through%2520a%2520Glass%252C%2520Darkly%2522%253A%2520Lessons%2520Learned%2520Starting%2520over%2520as%2520an%2520Electronic%2520Resources%2520Librarian.%26title%3DSerials%20Librarian%26issn%3D0361526X%26isbn%3D%26volume%3D81%26issue%3D3%252F4%26date%3D20220701%26au%3DBrown%2C%20Daniel%26spage%3D246%26pages%3D246-252%26sid%3DEBSCO%3ALibrary%252C%2520Information%2520Science%2520%2526%2520Technology%2520Abstracts%3A156075536

In Social Science Premium Collection, the link looks like this:

https://www.proquest.com/docview.accesstofulltextlinks.detailsorabstractoutboundlinks.externallink:externallink/https:$2f$2fsaalck-uky.primo.exlibrisgroup.com$2fopenurl$2f01SAA_UKY$2f01SAA_UKY:UKY$3furl_ver$3dZ39.88-2004$26rft_val_fmt$3dinfo:ofi$2ffmt:kev:mtx:journal$26genre$3darticle$26sid$3dProQ:ProQ$253Alibraryscience$26atitle$3d$2526ldquo$253BThrough$2ba$2bGlass$252C$2bDarkly$2526rdquo$253B$253A$2bLessons$2bLearned$2bStarting$2bover$2bas$2ban$2bElectronic$2bResources$2bLibrarian$26title$3dThe$2bSerials$2bLibrarian$26issn$3d0361526X$26date$3d2021-11-01$26volume$3d81$26issue$3d3-4$26spage$3d246$26au$3dBrown$252C$2bDaniel$26isbn$3d$26jtitle$3dThe$2bSerials$2bLibrarian$26btitle$3d$26rft_id$3dinfo:eric$2f$26rft_id$3dinfo:doi$2f10.1080$252F0361526X.2021.2008581/MSTAR_2645781371/LinkResolver/1193?t:ac=2645781371/Record/D137B205B8D14795PQ/1

Clicking on either of the above in their respective databases will send us to Primo, UK Library's discovery layer.

If we click on EBSCOhost's View Now link, the resulting Primo link looks like this:

https://saalck-uky.primo.exlibrisgroup.com/discovery/openurl?institution=01SAA_UKY&vid=01SAA_UKY:UKY&date=20220701&issue=3%2F4&isbn=&spage=246&title=Serials%20Librarian&atitle=%22Through%20a%20Glass,%20Darkly%22:%20Lessons%20Learned%20Starting%20over%20as%20an%20Electronic%20Resources%20Librarian.&sid=EBSCO:Library,%20Information%20Science%20%26%20Technology%20Abstracts:156075536&volume=81&pages=246-252&issn=0361526X&au=Brown,%20Daniel&genre=article&ID=doi:10.1080%2F0361526X.2021.2008581

If we look closely at those links (scroll to the right to view them in their entirety), we can see that the article's metadata is embedded in the URLs. Among other things, we can see the publication title, the article title, the author's name, the DOI, and more.

That metadata is used to trigger a search query in the library's discovery platform (at UK, that's InfoKat Discovery, powered by Primo). Specifically, it initiates an HTTP GET request, which requests data from a server, in this case InfoKat Discovery, using the metadata embedded in the URLs seen above.

This is the work of an OpenURL link resolver, among other technologies, which is designed to provide access to a target regardless of the source by initiating queries in an OPAC or discovery platform using the metadata embedded in a URL.

This is a technical solution to the paywall problem that is designed to help users of electronic resources access a source in a library's collection based on a citation/record discovered in a search result, an article's list of references, or wherever else the link resolver might show up. It is meant to work for all items in a library's collection, including print items, since print items have records in the catalog or discovery service.
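
To make that GET request concrete, here is a minimal Python sketch that builds an OpenURL by attaching citation metadata to UK's Primo OpenURL base shown above. The exact field names and values any given vendor sends will vary, so treat this as an illustration rather than any vendor's actual implementation:

```python
# Build an OpenURL for the Brown (2021) article by attaching citation metadata
# to the discovery service's OpenURL base; following the link is an HTTP GET.
from urllib.parse import urlencode, quote

base = "https://saalck-uky.primo.exlibrisgroup.com/discovery/openurl"

metadata = {
    "institution": "01SAA_UKY",   # administrative metadata (who is asking)
    "vid": "01SAA_UKY:UKY",
    "genre": "article",
    "atitle": ("Through a Glass, Darkly: Lessons Learned Starting over "
               "as an Electronic Resources Librarian"),
    "title": "Serials Librarian",
    "issn": "0361526X",
    "volume": "81",
    "issue": "3/4",
    "spage": "246",
    "au": "Brown, Daniel",
    "ID": "doi:10.1080/0361526X.2021.2008581",
}

# urlencode percent-encodes the values (spaces become %20, "/" becomes %2F, etc.)
openurl = base + "?" + urlencode(metadata, quote_via=quote)
print(openurl)
```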

Use Cases

Google Scholar Example

Let's imagine a search in Google Scholar, and that as a result of this search, we identify a paywalled article that we wish to retrieve from the library. If we have told Google Scholar that we are affiliated with a specific library, then Google Scholar becomes aware of that library's collections (through a knowledge base) and will use the library's link resolver service to retrieve a target from those collections using the following process:

  1. The metadata about the article will be extracted from Google Scholar (aka, the source).
  2. More metadata will be added about the institution (administrative metadata, like an institutional ID number).
  3. The metadata is converted into a URL query that queries the library's collections in the discovery service, and
  4. The user is then presented with target options (or taken directly to the work) for retrieving the article.
    • the options may include full text access from various and possibly multiple vendors or publishers, information about the physical location (e.g., on the shelves) if it exists there, or options to request the work through interlibrary loan. Ideally, it will lead the user directly to the full text.

To link Google Scholar to an affiliation:

  1. Go to https://scholar.google.com/
  2. Open Settings
  3. Click on the Library Links tab
  4. Search for your affiliation
    • e.g., University of Kentucky
  5. Add and save

Now when you search in Google Scholar, you should see View Now @ UK links (if your affiliation is University of Kentucky) next to search results that your affiliation has in its collections.

See Link Resolver 101 for additional details and this historical piece on link resolvers (McDonald & Van de Velde, 2004).

Consider a basic keyword search in Google Scholar for the term electronic resources. One of the first items listed on the results page is an article titled "Electronic resources: access and usage at Ashesi University College." If we've connected our library to Google Scholar, then we should see a View Now @ UK link to the right of that search result. In the following OpenURL, we can see the article's metadata and also that google is the source (just as we could see that ebscohost and proquest were the sources in the URLs above).

https://saalck-uky.primo.exlibrisgroup.com/openurl/01SAA_UKY/01SAA_UKY:UKY?sid=google&auinit=PS&aulast=Dadzie&atitle=Electronic+resources:+access+and+usage+at+Ashesi+University+College&id=doi:10.1108/10650740510632208

Full text for that article is provided by Emerald eJournals Premier; Emerald is the original publisher of this journal and provides the original view of the article. That means Primo next hands off our query to UK Library's proxy service, EZproxy, which asks us to authenticate with our university account login information and then takes us to a copy of the full text from the provider. The article is also available as full text through two ProQuest databases, but Emerald's view takes precedence since it's the original publisher.

If other resources provided access, like the EBSCOhost and ProQuest databases, but not the original publisher, like Emerald, we would stop at Primo and be asked to select which database we would like to use to view the full text.

In our case, since the original publisher's version takes precedence, the transfers from Primo to EZproxy to the Emerald full text view happen quickly.

Dissecting an OpenURL

Let's take a closer look at the Primo URL. By examining its components, we can see that it's an OpenURL link, and we can identify the fields, values, and metadata. The percent signs and numbers in the title field use percent-encoding, a process for encoding characters that are not URL friendly, like the spaces between words, into sequences that can be safely transmitted in a URL. See this page for a table of UTF-8 percent-encodings and the characters they match. I have inserted newlines into the Primo link below for readability:

https://saalck-uky.primo.exlibrisgroup.com/discovery/openurl?
institution=01SAA_UKY&
vid=01SAA_UKY:UKY&
aulast=Dadzie&
id=doi:10.1108%2F10650740510632208&
auinit=PS&
atitle=Electronic%20resources%20access%20and%20usage%20at%20Ashesi%20University%20College&
sid=google

The link resolver technology translates the metadata embedded in the above link as needed for the appropriate service. The institution, vid, and sid fields are administrative metadata that identify the source information. The key fields used to retrieve the record for this source are the following (a short parsing sketch follows the list):

  • aulast for author's last name
  • id for the DOI
  • auinit for the author's first two initials
  • atitle for the article title
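
The parsing can be done with a few lines of Python, which also reverses the percent-encoding automatically. This is only an illustration of how the fields can be recovered from the URL, not part of any Primo workflow:

```python
# Pull the metadata fields out of the Primo OpenURL shown above.
# parse_qs decodes the percent-encoding (e.g., %2F back to "/").
from urllib.parse import urlsplit, parse_qs

openurl = (
    "https://saalck-uky.primo.exlibrisgroup.com/discovery/openurl?"
    "institution=01SAA_UKY&vid=01SAA_UKY:UKY&aulast=Dadzie&"
    "id=doi:10.1108%2F10650740510632208&auinit=PS&"
    "atitle=Electronic%20resources%20access%20and%20usage%20at%20Ashesi%20University%20College&"
    "sid=google"
)

fields = parse_qs(urlsplit(openurl).query)
for key, values in fields.items():
    print(key, "=", values[0])

# aulast = Dadzie
# id = doi:10.1108/10650740510632208
# atitle = Electronic resources access and usage at Ashesi University College
# ...and so on for institution, vid, auinit, and sid.
```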

In Case of Interlibrary Loan

We can see another instance of this within Primo itself. Here I search for the phrase electronic resources and filter by WorldCat options from the drop down box to the right of the search box. By filtering for WorldCat options, I'm more likely to retrieve records that are not in UK Library's collections.

The first option is a work titled Electronic Resources. Selection and bibliographic control. Since this is not available via UK Libraries, I would have to request the item through interlibrary loan. When I do that, the link resolver triggers ILLiad, which is used for interlibrary loan. Note how the OpenURL looks quite different here. Essentially, the OpenURL is contextual: its context reflects the service being used (i.e., EBSCOhost, ProQuest, Google Scholar, Primo, ILLiad, etc.), which determines the metadata elements in the URL. Note that some elements are empty (e.g., rft.date=& is an empty value for the date field, versus rft.genre=book&, which holds the value book for the genre field). A short sketch after the URL shows how the populated and empty fields can be separated.

https://lib.uky.edu/ILLiad/illiad.dll?
Action=10&
Form=30&
rft.genre=book&
rft.au=Pattie%2C+Ling-yuh+W.&
rft.title=&
rft.title=Electronic+resources.+Selection+and+bibliographic+control&
rft.stitle=&
rft.atitle=&
rft.date=&
rft.month=&
rft.volume=&
rft.issue=&
rft.number=&
rft.epage=&
rft.spage=&
rft.edition=&
rft.isbn=1000111849&
rft.eisbn=&
rft.au=Pattie,&
rft.auinit=L&
rft.pub=CRC+Press&
rft.publisher=&
rft.place=Boca+Raton&
rft.doi=&
rfe_dat=1196192673&
rfr_id=
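
Here is a small Python sketch, using a trimmed-down subset of the query string above, that separates the populated fields from the empty ones. The subset and the variable names are mine; the point is only that empty parameters are preserved and easy to spot programmatically:

```python
# Separate populated from empty fields in an ILLiad-style OpenURL query.
# keep_blank_values=True preserves parameters like rft.date= that carry no value.
from urllib.parse import parse_qsl

query = (
    "Action=10&Form=30&rft.genre=book&rft.au=Pattie%2C+Ling-yuh+W.&"
    "rft.title=Electronic+resources.+Selection+and+bibliographic+control&"
    "rft.date=&rft.volume=&rft.isbn=1000111849&rft.pub=CRC+Press&"
    "rft.place=Boca+Raton&rft.doi=&rfe_dat=1196192673"
)

pairs = parse_qsl(query, keep_blank_values=True)
populated = {key: value for key, value in pairs if value}
empty = [key for key, value in pairs if not value]

print(populated["rft.genre"])  # book
print(populated["rft.au"])     # Pattie, Ling-yuh W.
print(empty)                   # ['rft.date', 'rft.volume', 'rft.doi']
```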

Readings

Our readings this week by Kasprowski (2012), Johnson et al. (2015), and Chisare et al. (2017) discuss link resolver technology, migration to new link resolver services, and methods to evaluate link resolver technology from both the systems and the user's perspective. It may not be necessary to learn how to hack your way through the OpenURL syntax, as I have above (or below: see Appendix A), or through other aspects of link resolver URL formatting, but it is a good idea to acquire a basic understanding of how the URLs work in this process.

Let me re-emphasize that the key way that link resolvers work is by embedding citation metadata within the link resolver URL, including administrative metadata. This is another reason to have high quality metadata for our records, as our readings note. By implication, if we find, perhaps by an email from a library patron, that a link has broken in this process, it might be that the metadata is incorrect or has changed in some important way. Knowing the parts of this process aids us in deciphering possible errors that exist when the technology breaks.

For this week, see the Ex Libris Alma link resolver documentation; Alma is the link resolver product used by UK Libraries. Let's discuss this documentation in the forum. I want you to find and explain other instances of link resolvers. Be sure to provide links to these examples and articulate ways the technology can be evaluated.

Documentation to read and discuss:

Link Resolver, Usage

Additional information

Appendix A

How I Enhanced Zotero by Hacking OpenURL

Since OpenURL compatible link resolver technology is partly based on query strings, as we have seen, we can glean all sorts of information by examining these URLs: the query string component that contains the metadata for the source, but also the base component that contains the vendor and institutional information, and also the URL type. When I worked on this section, I learned that Primo/Alma uses two URL types to request resources: a search URL and an OpenURL. We can see this in the URLs. The base search URL looks like this:

https://saalck-uky.primo.exlibrisgroup.com/discovery/search?

The base OpenURL differs just a bit (see the end of the URL):

https://saalck-uky.primo.exlibrisgroup.com/discovery/openurl?

The base search URL appears when searching the university's discovery service. The OpenURL, however, appears only when needed, in transit between the source and the target: e.g., after clicking on a View Now @ UK link and before being redirected to the full text version. I copied my institution's specific OpenURL when I clicked on a View Now @ UK link and before it redirected to the EZproxy page.

My students often identify great problems to solve or are the source of great ideas. In a previous semester, one of the students in my electronic resource management class noticed that Zotero has a Locate menu that uses OpenURL resolvers to look up items in a library. By default, Zotero uses WorldCat, but it can use a specific institution's OpenURL resolver. I had completely forgotten about this. When I investigated whether my institution was listed in the Zotero Locate menu, I found that it was not, nor was it listed on Zotero's page of OpenURL resolvers.

At the time, I didn't know what my institution's exact OpenURL was, but I was able to figure it out by comparing the syntax and values from other Primo URLs listed on Zotero's page of OpenURL resolvers. By comparing these OpenURLs, I was able to derive my institution's specific OpenURL (base component plus institutional info), which is:

https://saalck-uky.primo.exlibrisgroup.com/discovery/openurl?institution=01SAA_UKY&vid=01SAA_UKY:UKY

I added that to Zotero, and it worked. I then posted the OpenURL info to Zotero's forum, and they've added it to their OpenURL resolver page. If others are curious about how to add this info to Zotero, another library has created a video on this. The directions cover adding a specific OpenURL to Zotero and using Zotero's Library Lookup functionality.

Appendix B

A Basic URL

I mentioned query strings above. These are the part of a URL that includes instructions for query engines, databases, or websites (like Wikipedia). The parameters (i.e., search terms) are part of a query string, too. It's also important to understand the base part of a URL, because the link in link resolver is a central part of the whole process. A URL for an article can look like this:

https://www.emerald.com/insight/content/doi/10.1108/10650740510632208/full/html

This URL contains the following components:

  • https:// : indicates the secure version of the hypertext transfer protocol (HTTPS)
  • www : indicates the subdomain
  • emerald : indicates the second level domain name
  • .com : indicates the top level domain

Under a standard configuration, the rest of the URL indicates directory (or folder) location information on the emerald.com server. The following suggests that the article is seven directories (or folders) deep on the emerald.com server (a short sketch after the path shows how to pull these components apart):

  • /insight/content/doi/10.1108/10650740510632208/full/html
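
Python's standard library can split a URL into the components just described. This is only a quick illustration of the anatomy of the URL above:

```python
# Break the Emerald article URL into scheme, host, and path components.
from urllib.parse import urlparse

url = "https://www.emerald.com/insight/content/doi/10.1108/10650740510632208/full/html"
parts = urlparse(url)

print(parts.scheme)  # https            -> the protocol
print(parts.netloc)  # www.emerald.com  -> subdomain + second level domain + top level domain
print(parts.path)    # /insight/content/doi/10.1108/10650740510632208/full/html
```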

The DOI (digital object identifier) for this article is part of the above URL and is specifically 10.1108/10650740510632208. The DOI is composed of a prefix and a suffix. The prefix includes the following elements:

  • 10 : this is the directory indicator
  • 1108 : the registrant code, which identifies the registrant (here, the publisher of this journal)

The suffix refers to the following element:

  • 10650740510632208 : a character string (in this case, of numbers) that refers to the article. This string is created by the registrant

The DOI itself can be used to create a permanent URL for the above work by adding https://doi.org/ to the beginning:

https://doi.org/10.1108/10650740510632208
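
As a tiny sketch of that construction, with the prefix and suffix described above:

```python
# Build the permanent doi.org URL from the DOI's prefix and suffix.
doi_prefix = "10.1108"            # "10" directory indicator + "1108" registrant code
doi_suffix = "10650740510632208"  # character string assigned by the registrant
doi = f"{doi_prefix}/{doi_suffix}"

permanent_url = "https://doi.org/" + doi
print(permanent_url)  # https://doi.org/10.1108/10650740510632208
```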

Readings / References

Chisare, C., Fagan, J. C., Gaines, D., & Trocchia, M. (2017). Selecting link resolver and knowledge base software: Implications of interoperability. Journal of Electronic Resources Librarianship, 29(2), 93–106. https://doi.org/10.1080/1941126X.2017.1304765

Johnson, M., Leonard, A., & Wiswell, J. (2015). Deciding to change OpenURL link resolvers. Journal of Electronic Resources Librarianship, 27(1), 10–25. https://doi.org/10.1080/1941126X.2015.999519

Kasprowski, R. (2012). NISO’s IOTA initiative: Measuring the quality of openurl links. The Serials Librarian, 62(1–4), 95–102. https://doi.org/10.1080/0361526X.2012.652480

Additional References

McDonald, J., & Van de Velde, E. F. (2004, April 1). The lure of linking. Library Journal. Library Journal Archive Content. https://web.archive.org/web/20140419201741/http://lj.libraryjournal.com:80/2004/04/ljarchives/the-lure-of-linking/

Electronic Access

Introduction

Access is the paramount principle of librarianship, and all other issues, from censorship to information retrieval to usability, are on some level derived from or framed by that principle of access.

This week we devote ourselves to a discussion of electronic access. To start, let's begin with Samples and Healy (2014), who provide a nice framework for thinking about managing electronic access. They include two broad categories, proactive troubleshooting and reactive troubleshooting of access.

  • proactive troubleshooting of access: "defined as troubleshooting access problems before they are identified by a patron". Some examples include:
    • "letting public-facing library staff know about planned database downtime"
    • "doing a complete inventory to make sure that every database paid for is in fact 'turned on'
  • reactive troubleshoot of access: "defined as troubleshooting access issues as problems are identified and reported by a patron". Some examples include:
    • "fixing broken links"
    • "fixing incorrect coverage date ranges in the catalog"
    • "patron education about accessing full text"

The goal here, as suggested by Samples and Healy (2014), is to maximize proactive troubleshooting and to minimize reactive troubleshooting. The Samples and Healy (2014) report is a great example of systematic study. The authors identify a problem that had grown "organically," collect and analyze data, and then generalize from the data by outlining a "detailed workflow" to "improve the timeliness and accuracy of electronic resource work." Practically, studies like this promise to improve productivity and work flows and to foster job and patron satisfaction. Such studies also help librarians identify the kinds of software solutions that align with their own workflows and patron information behaviors. If interested, I suggest reading Lowe et al. (2021) about the impact of Covid-19 on electronic resource management. Six authors individually describe access issues at their respective institutions and show how issues of pricing, acquisitions, training, user expectations, and budgets affect electronic access. I suggest reading articles like this in light of the framework provided by Samples and Healy (2014), because stories like these, about the impact of the pandemic on electronic access, can help guide us in developing proactive troubleshooting procedures that minimize future issues, pandemic or otherwise, at our own institutions.

Samples and Healy (2014) say something important against a common assumption about electronic resources, particularly those provided by vendors:

The impression that once a resource is acquired, it is then just 'accessible' belies the actual, shifting nature of electronic resources, where continual changes in URLs, domain names, or incompatible metadata causes articles and ebooks to be available one day, but not the next (The Complexity of ERM section, para. 6).

Hence, unlike a printed work from the long-ago print-only era that, once cataloged, may be shelved for decades or longer without major problems of access, electronic resources require constant and active attention to maintain accessibility. Ebooks, for example, can create metadata problems: often what's important about scholarly ebooks, in particular, are the chapters they include, and hence metadata describing ebook components is important, along with providing links to those chapters in discovery systems. This difference between item-level cataloging and title-level cataloging, as Samples and Healy describe, can lead to confusing and problematic results when considering different genres and what those genres contain.

Note also that they discuss how a series of links is involved, starting from the source of discovery, e.g., an OPAC or a discovery layer, to the retrieved item, and how difficult it might be to determine which of these links and which of these services is broken when access becomes problematic.

Let me highlight a few key findings from their report:

  • Workflows: why does this keep coming up? It's because workflows help automate a process---simplify and smooth out what needs to be done, and because this is only possible when things are standardized.
  • Staffing: we'll discuss this more in another section, but part of the problem here is that ERM has had a major impact on organizational structure, but one where different libraries have responded differently. This lack of organizational standardization has its benefits regarding overall management practices and cultures, but it also has huge drawbacks---and that's the difficulty in establishing effective, generalized workflows that include key participants, and to minimize as many dependencies on any one person.
  • Tracking: if there's no tracking, there's no method to systematically identify patterns in problems. And if that's not possible, then there's no method to solve those problems proactively. It becomes all reactive troubleshooting, and reactive troubleshooting, as Samples and Healy indicate, results in poor patron experiences. We'll discuss tracking during the week on Evaluation and Statistics.

We commonly get the line that discovery systems are a great solution to all the disparate resources that librarians subscribe to. Or, if we do think about problems with such systems, we are often presented with a basic information retrieval problem, such that the larger the collection to search, the more likely a relevant item will get lost in the mix. Carter and Traill (2017) point out that these discovery systems also tend to reveal access problems as they are used. The authors provide a checklist to help track issues and improve existing workflows.

Buhler and Cataldo (2016) provide an important reminder that the mission of the electronic resource librarian is to serve the patron. This should remind us that the internet and the web have flattened genres. By that I mean they have made it difficult to distinguish among works like magazine articles, news articles, journal articles, encyclopedia articles, ebooks, etc. Though the Buhler and Cataldo (2016) reading is student-focused, other studies have hinted at the same issue across other populations. It's important, where possible, to recognize these issues as ERM librarians and to work to resolve them in whatever ways you are able.

Myself, I grew up learning about the differences between encyclopedia articles, journal articles, magazine articles, newspaper articles, book chapters, handbooks, indexes, and dictionaries because I grew up with the print versions, which, by definition, were tangible things that looked different from each other. Today, a traditional first-year college student was born around the year 2004 and grew up reading sometime in the last decade. The problem this raises is that although electronic resources are electronic or digital, they are still based on genres that originated in the print age, yet they lack the physical characteristics that distinguished one from the other. E.g., what's the difference between a longer NY Times article (traditionally a newspaper article) and an article in the New Yorker (traditionally a magazine article) today in their online forms? Aside from some aesthetic differences, both are presented on web pages, and based on any kind of cursory examination it's not altogether obvious to regular users that they're different genres. However, there are important informational differences between the two, in how they were written, how they were edited, how long they are, and who wrote them, that might still lead us to consider them different genres. Even Wikipedia articles pose this problem. Citing an encyclopedia article was never an accepted practice, but this was only true for general encyclopedias. It was generally okay to cite articles from special encyclopedias because they focused on limited subject matters like art, music, science, or culture, and were usually more in-depth in their coverage. Examples include the Encyclopedia of GIS, the Encyclopedia of Evolution, The Kentucky African American Encyclopedia, The Encyclopedia of Virtual Art Carving Toraja--Indonesia, and so forth. There are studies that show that Wikipedia provides the same kind of in-depth coverage as some special encyclopedias, thus helping to flatten the encyclopedia genre, too.

The flattening holds true for things like Google. The best print analogy for Google is the index, which was used to locate keywords that would refer to source material. The main difference between these indexes and Google is that the indexes were produced to cover specific publications, like a newspaper, or specific areas, like the Social Science Citation Index or the Science Citation Index, both of which are actual, documented, historical precursors to Google and to Google Scholar. But today, these search engines are erroneously considered source material (e.g., "I found it on Google"). Few, I think, would have considered a print index as source material, but rather as a reference item, since it referred users to sources. Nowadays, it's all mixed up, but who can blame anyone?

Example print indexes:

Access and Authentication

Much of what exists in a library's electronic collections is paywalled; therefore, librarians use software that authenticates users before they can access those collections. This is generally required by agreements with content vendors.

There are two main technologies used to authenticate users. The first is through an IP/proxy server. Here, EZproxy (OCLC) is the main product in this arena, and in fact we use EZproxy at UK. When we access any paywalled work, like a journal article, we may notice the ezproxy.uky.edu string of text in the URL. For example, the following is an EZproxy URL:

https://www-sciencedirect-com.ezproxy.uky.edu/science/article/pii/S030645730500004X

The interesting thing about this URL is that it has a uky.edu address even though the article is in a journal that's hosted in Elsevier's ScienceDirect database. The www-sciencedirect-com part of the address is a simple subdomain of ezproxy.uky.edu (you can tell because the components are separated by dashes instead of periods). As a subdomain, it is no different than the www in www.google.com or the maps in maps.google.com. The original URL is in fact:

https://www.sciencedirect.com/science/article/pii/S030645730500004X

As opposed to the first URL, the interesting thing about the original URL is that it is in fact a sciencedirect.com address. Even though "sciencedirect" appears in the uky.edu URL, it is not a "sciencedirect.com" server. They are two different servers, from two different organizations, and are as different as uky.edu and google.com.

The reason we read an article or some other paywalled content at a uky.edu address, and not at, e.g., a sciencedirect.com address, is because of the way proxy servers work. In essence, when we request a resource provided by the library, like a journal article or a bibliographic database, our browser makes the request to the proxy server and not to the original server. The proxy server then makes the resource request to the original server, which relays that content back to the proxy server (EZproxy), which then sends the content to our browser. This means that when we request an article in a journal at sciencedirect.com or jstor.com, our browser never actually makes a connection to those servers. Instead, the proxy server acts as a go-between. See Day (2017) for a more technical and yet accessible description of the process.

Proxy servers provide access either through a login server or based on the user's IP address. If we're on campus, then our authentication is IP based, since all devices attached to the university's network are assigned an IP from a pre-defined range of IP addresses. This makes access to paywalled content fairly seamless, when on campus.

If we are off-campus, access is authenticated via a login to the proxy server. When we attempt to access paywalled content from off-campus, we will see an EZproxy login URL. For the ScienceDirect database, it looks something like this:

http://ezproxy.uky.edu/login?url=https://www.sciencedirect.com

Aside from ScienceDirect, you can see a list of other subscribed content that requires EZproxy authentication here:

https://login.ezproxy.uky.edu/menu
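
The login URL pattern above is simple enough to construct by hand. Here is a hypothetical helper, a sketch of my own and not an OCLC tool, that wraps an original resource URL in the EZproxy login form shown above:

```python
# Wrap an original resource URL in the EZproxy login URL pattern
# (hypothetical helper for illustration; not part of EZproxy itself).
from urllib.parse import quote

EZPROXY_LOGIN = "http://ezproxy.uky.edu/login?url="

def proxied(url: str) -> str:
    """Return the EZproxy login form of an original resource URL."""
    # quote() escapes characters that would confuse the outer URL,
    # while leaving ":" and "/" in place.
    return EZPROXY_LOGIN + quote(url, safe=":/")

print(proxied("https://www.sciencedirect.com/science/article/pii/S030645730500004X"))
# http://ezproxy.uky.edu/login?url=https://www.sciencedirect.com/science/article/pii/S030645730500004X
```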

The second main technology used to authenticate and provide access is based on SAML (Security Assertion Markup Language) authentication. The main product that provides SAML authentication for libraries is OpenAthens.

Unlike a proxy / IP authentication process, SAML is a type of identity verification system. Under this method, libraries offer a single sign-on process, and once authenticated, patrons have access to all SAML-ready content or service providers. The process is similar to the Duo single sign-on service our university uses for authentication. In the OpenAthens case, users are authenticated via an identity provider, which would be the library. The library provides identification by connecting to its organization's identity management system, such as ADFS, or Active Directory Federation Services. Once a patron has been authenticated, a confirmation is sent to the content provider, which then provides the content to the patron. For more details, see What is SAML? and this detailed OpenAthens software demo.

One of the benefits of this method is that URLs are not proxied, which means that content is not delivered to the patron from a proxy server like EZproxy. Instead, patrons access the original source directly. From a patron's perspective, this makes sharing URLs nicer. As far as I can tell, one of the downsides might be privacy related. With a proxy server, users don't access the original source; instead, the source is delivered through the proxy server, which, by definition, masks the patron's IP address and browser information. This wouldn't be true under the SAML method.

Conclusion

The Samples & Healy (2014) and the Carter & Traill (2017) articles address troubleshooting strategies for electronic resources. One additional thing to note about these readings is how organizational structure influences workflows and how the continued transition from a print-era model of library processes to an electronic one remains problematic. Even once that transition is complete, both readings make the case that strategy and preparation are needed to deal with these issues. The Buhler & Cataldo (2016) article shows how confusing e-resources are to patrons and how the move to digital has complicated all genres, or "containers", as the authors name them. Such "ambiguity" has implications not only for how users find and identify electronic resources but also for how librarians manage access to them.

I added the EZproxy and OpenAthens content in order to complete the technical discussions we have had in recent weeks on integrated library systems, electronic resource management systems, link resolvers, and standards. These authentication and access technologies round out these discussions, which, altogether, cover the major technologies that electronic resource librarians work with to provide access to paywalled content in library collections. Both technologies aim to provide access to paywalled content that is nearly as seamless as accessing content via a search engine or other source. Although neither will ever be able to offer completely seamless access as long as there are paywalled sources in library collections, the job of an electronic resource librarian is often to make sure they work as well as possible. This will often mean working with vendors and colleagues.

References

Samples, J., & Healy, C. (2014). Making it look easy: Maintaining the magic of access. Serials Review, 40, 105-117. https://doi.org/10.1080/00987913.2014.929483

Carter, S., & Traill, S. (2017). Essential skills and knowledge for troubleshooting e-resources access issues in a web-scale discovery environment. Journal of Electronic Resources Librarianship, 29(1), 1–15. https://doi.org/10.1080/1941126X.2017.1270096

Buhler, A., & Cataldo, T. (2016). Identifying e-resources: An exploratory study of university students. Library Resources & Technical Services, 60, 22-37. https://doi.org/10.5860/lrts.60n1.23

Additional Readings / References

Breeding, M. (2008). OCLC Acquires EZproxy. Smart Libraries Newsletter, 28(03), 1–2. https://librarytechnology.org/document/13149

OCLC. (2017, September 22). EZproxy. OCLC Support. https://help.oclc.org/Library_Management/EZproxy

OpenAthens transforms user access to library resources, replacing EZproxy and IP address authentication. (2021, June 2). About UBC Library. https://about.library.ubc.ca/2021/06/02/openathens-transforms-user-access-to-library-resources-replacing-ezproxy-and-ip-address-authentication/

Botyriute, K. (2018). Access to online resources. Springer International Publishing. https://doi.org/10.1007/978-3-319-73990-8

Day, J. M. (2017, April 25). Proxy servers: Basics and resources. Library Technology Launchpad. https://libtechlaunchpad.com/2017/04/25/proxy-servers-basics-and-resources/

Lowe, R. A., Chirombo, F., Coogan, J. F., Dodd, A., Hutchinson, C., & Nagata, J. (2021). Electronic Resources Management in the Time of COVID-19: Challenges and Opportunities Experienced by Six Academic Libraries. Journal of Electronic Resources Librarianship, 33(3), 215–223. https://doi.org/10.1080/1941126X.2021.1949162

Chapter Three: Processes and Contexts

This chapter will be updated (8/20/2022).

Workflow

If all goes according to plan, this week's readings on electronic resource management and on workflow analysis should help put prior material into context and act as a bridge to the material we will discuss in much of the remaining weeks of the semester.

In the first three weeks of the semester, we learned about and discussed:

  • what it means to be a librarian who oversees or is a part of electronic resource management,
  • what kinds of criteria are sought for in new hires, and
  • why electronic resources have introduced so much constant disruption across libraries.

This latter point is largely due to, among other things, the fact that the print era involved a largely (or at least more) linear process of collection management, a process that was fundamentally altered by the introduction of electronic resources.

Then we discussed the differences between:

  • electronic resource management software and
  • integrated library system software.

We also learned about:

  • technical and workflow standards

And we discussed:

  • why both types of the above standards are important,
  • why interoperability is required, and
  • what happens when access to electronic resources break.

This week things will start to make connections at a faster pace. In the first Anderson article (chapter 2), we gain a clearer idea of what a knowledge base is and how it works, and we learn more about how integrated library systems and ERM systems work together (or fail to). We also dip our toes into newer topics like licensing, COUNTER, and SUSHI, which we'll cover in greater detail in the next few weeks.

In the second Anderson article (chapter 3), we learn how to carefully consider a library's work flow before selecting which ERM software to purchase. This is why workflow-based standards are important, even if they are not true technical standards. We do this because we select systems based on the needs of the librarians, which may be vastly different across libraries, and which may rely on different aspects of the overall process. As you read this chapter, I want you to keep in mind the Samples and Healy article from the previous week's reading.

As hinted at in these readings, especially in the section on acquisitions, budget, subscription, and purchasing in chapter two, but also in the multiple discussions about the role vendors play in electronic resource management, the market and the economics of this area of librarianship weigh heavily on everyday realities. We will follow up on this next week when we begin to read more about the market and the economics of electronic resources. For example, in our Anderson readings this week, we learn about the CORE recommended practice (RP), or the Cost of Resource Exchange, which was developed by NISO. CORE brings together three aspects of our previous discussions: software, funds, and interoperability. The CORE RP describes how the ILS and ERM systems can communicate the costs of electronic resources between each other. Its existence hints at the pressure librarians face in dealing with complex budget issues. Although this is not touched on in these articles, the current pandemic will make these issues more complicated for libraries.

While we spent time discussing technical standards, we also learned about TERMS, an attempt to standardize the language and processes involved with electronic resource management. We see more connections in this week's readings. Aside from the CORE standard, we learn more about attempts to standardize licensing, and about the COUNTER and SUSHI statistics-related standards, which provide standards for the communication, collection, presentation, and formatting of usage statistics for electronic resources such as ebooks, journals, databases, and more.

We have also discussed interoperability, and what it takes for multiple systems to connect and transfer content between each other. We primarily discussed this with respect to link resolver technology, and we did this not just because we should know about link resolvers as important components of electronic resource management, but also because link resolvers are a good example of the kind of work that is involved for systems to communicate properly. There are other forms of interoperability, though, and coming back to CORE again, the Anderson article (chapter 2) provides a link to a white paper titled White Paper on Interoperability between Acquisitions Modules of Integrated Library Systems and Electronic Resource Management Systems, and this paper defines the 13 data elements determined to be desirable in any exchange between ILS software and ERM software so that the two systems can communicate usefully with each other. By that, I mean the data points enable meaningful use of both the ILS software and the ERM software; they include (see the sketch after the list):

  • purchase order number
  • price
  • start/end dates
  • vendor
  • vendor ID
  • invoice number
  • fund code
  • invoice date
  • selector
  • vendor contact information
  • purchase order note
  • line item note
  • invoice note (White Paper ...)
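
As a minimal sketch, and illustrative only (this is not the CORE RP's own data format), those 13 elements could be gathered into a single record that an ILS and an ERM system exchange; the white paper lists "start/end dates" as one element, which is split into two fields here:

```python
# A hypothetical record bundling the 13 CORE data elements (illustration only).
from dataclasses import dataclass
from typing import Optional

@dataclass
class CoreCostRecord:
    purchase_order_number: Optional[str] = None
    price: Optional[str] = None
    start_date: Optional[str] = None   # "start/end dates" is one CORE element
    end_date: Optional[str] = None
    vendor: Optional[str] = None
    vendor_id: Optional[str] = None
    invoice_number: Optional[str] = None
    fund_code: Optional[str] = None
    invoice_date: Optional[str] = None
    selector: Optional[str] = None
    vendor_contact_information: Optional[str] = None
    purchase_order_note: Optional[str] = None
    line_item_note: Optional[str] = None
    invoice_note: Optional[str] = None

# Hypothetical values, for illustration only:
record = CoreCostRecord(
    purchase_order_number="PO-2023-0042",
    price="1250.00 USD",
    start_date="2023-01-01",
    end_date="2023-12-31",
    fund_code="EJOURNALS",
)
```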

That white paper contains worthwhile example use cases and stories from major libraries, and these cases are rather helpful reads. You are not required to read this paper, but I urge you to skim through it to get a sense of how standards are created through a process of comparing, contrasting, and coordinating needs and contexts among different entities. The link is in the transcript and in the reading, as is the link to the actual CORE protocol.

I really like our two readings by Anderson because they are illustrative of the whole ERM process. If you are able, visit the issue these two readings are from and read the other chapters that Anderson has written.

In short, this week's topic will also, if all goes well, help provide a foundation for the remaining weeks, when we will learn about and discuss things like licensing and negotiation and evaluation and statistics in more detail. Think of this week as a transition between the first part of the semester and what happens next.

Market and Economics

Add:

Against the Grain. (2022, July 18). Legally Speaking—States Unsuccessful in Providing Financial Relief of eBook Terms for Libraries. Charleston Hub. https://www.charleston-hub.com/2022/07/legally-speaking-states-unsuccessful-in-providing-financial-relief-of-ebook-terms-for-libraries/

Add publisher's perspective:

Sisto, M. C. (2022). Publishing and Library E-Lending: An Analysis of the Decade Before Covid-19. Publishing Research Quarterly, 38(2), 405–422. https://doi.org/10.1007/s12109-022-09880-7

This week we're reading a piece on the ebook market (Sanchez, 2015), and one on the academic journal market (Bosch, Albee, & Henderson, 2018). We'll learn how each impacts library budgets.

To understand Sanchez's article, we need to address some copyright topics. In the next part of this lecture, I'll discuss copyright, the first sale doctrine, and show how digital works have disrupted some basic ways that libraries function. The article by Bosch, Albee, and Henderson discusses a similar case among academic libraries, but with academic or scholarly journals as the focal point. I'll address some of the citation metrics they discuss.

Copyright law grants a monopoly to the person or corporate owner of an intellectual property. That is, the copyright owner, whether a person or an organizational entity, has exclusive rights over the material that they own. Section 106 of the law grants copyright owners the following rights:

(1) to reproduce the copyrighted work in copies or phonorecords;

(2) to prepare derivative works based upon the copyrighted work;

(3) to distribute copies or phonorecords of the copyrighted work to the public by sale or other transfer of ownership, or by rental, lease, or lending;

(4) in the case of literary, musical, dramatic, and choreographic works, pantomimes, and motion pictures and other audiovisual works, to perform the copyrighted work publicly;

(5) in the case of literary, musical, dramatic, and choreographic works, pantomimes, and pictorial, graphic, or sculptural works, including the individual images of a motion picture or other audiovisual work, to display the copyrighted work publicly; and

(6) in the case of sound recordings, to perform the copyrighted work publicly by means of a digital audio transmission.

Source: Copyright Section 106

These exclusive rights are all-encompassing and intentionally designed to allow copyright owners a monopoly over their property. In principle and under some constraints, this is a good thing. However, there are some implications that we should consider.

In short, if those exclusive rights were followed without limitations, then the exchange of money between a copyright holder and a buyer for something like a printed book or a DVD would not entail a transfer of ownership of that physical copy; that is, it would not allow the buyer of the physical item any distribution rights over the item once the first exchange had been made. Under such a scenario, libraries would be able to buy physical books but would not be able to lend them.

The first sale doctrine helps avoid the problem created by that full list of exclusive rights. Because of the first sale doctrine, established as precedent in the early 20th century and then codified into law in 1976, you, I, or a library may buy a physical copy of a work, like a book, a DVD, or a painting, and literally own that specific copy. The first sale doctrine does not grant us reproduction rights, as they are listed in Section 106 of the copyright law, but it does allow us to distribute the singular, physical representation or embodiment of the work that we have purchased. Thus, the first sale doctrine is why libraries were able to thrive throughout the 20th century, lend material, and preserve it. More mundanely, it's also why I can buy a book at a bookstore and later give it away or sell it to someone after I've finished it.

The digital medium makes things messier, as it tends to do. There are two big reasons for this. First, digital works are not subject to the same distribution constraints that physical works are subject to, and the first sale doctrine is about distribution rights, not reproduction rights. If I have a physical copy of some book and give you my copy of that book, then I no longer have that copy. However, if I have a copy of a digital file, then as we all know, it's relatively trivial for me to share that file with you without losing access to my own copy. Since digital works can be copied and distributed without anyone losing access to their copies or even to the original, the first sale doctrine does not apply. In the digital space, there are far fewer limitations on supply, including on lending.

Second, many digital works are like software, or at least, they are fully intertwined with the software needed to display them. This is true for all kinds of documents, like HTML pages, which need a web browser or text editor to read them and perhaps also other technologies, like JavaScript; or audio files, which need a media player to listen to them. But let's consider ebooks as an example. Ebooks arrive in all shapes and sizes. Project Gutenberg distributes ebooks that are in the public domain and in various file formats, including plain text documents with no presentational markup like bold, italics, and the like; HTML documents with markup; XML-based documents like EPUB; and also PDFs and others. Why so many file formats? Text is text, right? In the print space, a book is simply printed on a page, even if it's sometimes printed on different sized pages or using different typesetting. But these various formats exist because they each offer technological or presentational advantages and are often tied to specific pieces of software.

This is especially true for proprietary file formats, like the ones that Amazon created for use only on Kindles, or the popular MP3 file format for audio recordings, which only recently became patent free. While file formats like these may not necessarily count as software, depending on how we define software, and may be more like data structures, it is certainly true that file formats and the specific software applications that display them can be completely intertwined. If you are old enough, you may remember the headaches caused by files created as .doc in some early version of Microsoft Word that later failed to display properly in a newer version of Word or in some other word processing software. WordPerfect 5.1 was a popular word processing application in the 1990s, and it's not clear whether files created with that application, or with other popular word processors of the time, would open today, at least without intervention.

In short, these complexities introduce obstacles to the first sale doctrine and raise other copyright issues because of the connection to software, which is also often copyrighted. The result is that copyright holders and publishers have little financial interest in selling actual digital copies of works, since they cannot prevent future distribution without special technologies. Instead, they are more motivated to license material and sometimes to tie that material explicitly to specific pieces of software and hardware, like the Kindle, which must be bought separately and adds additional expense.

What does this mean for libraries in the digital age? It means that libraries buy less and rent or license more, and renting means that they continually pay for something for as long as they want access to it. As Sanchez (2015) puts it, "At its simplest, this takes the form of paying x dollars per year per title during the length of the contract." When the total supply of works increases, e.g., the total number of published books increases, as they do each year, then it means renting more and more without ever completely acquiring. When budgets are cut or are stagnant, this ultimately means a decline in the collection a library has to offer, or if not a decline in the collection, then cuts in some other areas of a library, like the number of librarians or other staff. This is the conundrum that Sanchez raises in his article.

If that alone were the issue, maybe librarians could discern other sustainable ways to proceed, but Sanchez raises additional issues and questions: what if publishers raise prices for digital content at an annual rate faster than they already do for print content (a reasonable assumption)? If so, does that mean that librarians will be able to afford fewer titles, digital or print, unless they raise their budgets? And as they weed, how would that impact the physical space of the library? See figure 2.3, specifically, from Sanchez's article. The plot shows just how much could be lost and how little gained if the forecasts Sanchez discusses come true.
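To make the forecasting concern concrete, here is a small, hypothetical projection in Python. It is not Sanchez's actual model; the budget, per-title cost, and inflation rate are made up purely to illustrate how a flat budget plus rising rental prices erodes what a library can afford:

# All figures below are hypothetical, for illustration only.
budget = 100_000.0          # flat annual e-content budget
cost_per_title = 100.0      # annual license cost per title
annual_increase = 0.06      # assumed 6% annual price increase

for year in range(1, 11):
    titles_affordable = int(budget // cost_per_title)
    print(f"Year {year:2}: ~{titles_affordable} titles affordable")
    cost_per_title *= 1 + annual_increase

Under these made-up numbers, the library can license about 1,000 titles in year one but fewer than 600 by year ten, without its budget ever being cut.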

Keep all of this in mind as you process Sanchez's article. You can even connect it to some discussions you've already had about accessing digital content. Specifically, there are many ways to put constraints on the supply of an item in the digital landscape, whereas there are fewer ways to limit supply in the physical space. That is, it's relatively easy for publishers and others to restrict the supply of physical works: they simply limit how many of those physical works are manufactured (e.g., the number of print runs). But given the nature of digital content, restricting supply is driven by the technologies available to do so, and since there are so many publishers and distribution points, each of those points will often create its own unique type of constraint on supply. The result is a number of confusing methods for limiting supply, even if these limitations are marketed as selling points. In practice, this may mean that only a limited number of people may "check out" a work from a library at one time, or access a database at one time, and so forth. Thus, the budget issue has an impact on access and usability.

As a bonus, the last time I taught this course, I reached out to Joseph Sanchez and interviewed him. Please enjoy the interview! It's long but it was a very fun exchange.

Read more about copyright:

https://www.copyright.gov/title17/92chap1.html

Although ebooks likely represent the biggest impact on public library budgets, academic libraries are largely concerned with scholarly journals. Like Sanchez (2015), Bosch, Albee, & Henderson (2018) show that the major issue here is that academic library budgets are declining or holding flat, even though prices for journal titles continue to increase and the number of published articles grows. This raises an interesting phenomenon---although researchers are hurt by the lack of access to research, researchers are also part of the cause of the increase simply because they continue to publish more and more. Ironically, the result of that rate of increase is less access for many.

The authors also note that part of the drive to publish includes a drive to publish in so-called prestigious journal titles, where prestigious is determined by how well cited the title is. The authors refer to a few citation-based metrics that the research community uses to determine prestige. These include the long-established Impact Factor, which can be examined in the Journal Citation Reports (JCR) provided by Clarivate Analytics, as well as newer ones, such as the Eigenfactor and the Article Influence Score, which can also be examined in JCR (the eigenfactor.org site is not well updated, at the time of this writing).

One motivation for using a citation metric as the basis for evaluating journal titles is that citation metrics indicate, at some level, the use of the title. That is, a citation to an article in a journal title means, ideally, that the citing authors have read the article. Historically, when Eugene Garfield invented the Impact Factor, it was partly intended as a tool for librarians to use in collection management, because he recognized this use-based theory of citations.
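For reference, the classic two-year Impact Factor is just a ratio: citations received in a given year to a journal's items from the previous two years, divided by the number of citable items published in those two years. A toy calculation with made-up counts looks like this:

cites_in_2022_to_2020_2021_items = 450   # hypothetical citation count
citable_items_2020_2021 = 300            # hypothetical count of citable items

impact_factor_2022 = cites_in_2022_to_2020_2021_items / citable_items_2020_2021
print(round(impact_factor_2022, 2))      # 1.5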

Citation metrics should never be the sole or even primary tool for such purposes, though. While they may provide good information, there are many caveats. First, there are different fields of research, and some fields cite at different rates, at different volumes, and for different reasons than other fields. This is why, in Table 5 of the Bosch, Albee, and Henderson (2018) article, the cost per cite for journals in the Philosophy & Religion category is so much higher than the cost per cite of titles in other categories. Authors in P&R simply have different citation and publishing behaviors than authors in other categories. Second, citations do not capture all uses of a journal. For example, there are many journal titles that I might use in my courses but not in my research, and this is true for other faculty, yet citation metrics won't reflect that kind of use. The authors refer to altmetrics, which were invented to help capture additional non-citing uses of scholarly products, but altmetrics are still in their infancy and are largely dependent on data sources and scholarly behavior that are problematic themselves. Third, there are various issues with the metrics themselves. The Impact Factor is based on a calculation that is outdated and not a very appropriate statistical measure, and the other calculations were created to address that but may have other problems. And fourth, the use of these metrics, regardless of which one, tends to drive publishing behavior---such that journal titles with higher metrics tend to attract more submissions and more attention, thus driving more citations to them. Citation-based metrics are thus comparable to a kind of capitalist economic system where, as the sociologist of science Robert Merton noted, the rich get richer (in citations) and the poor get poorer. The issue, then, is that prestige defined in this way does not necessarily indicate quality---just use.

The authors also discuss some issues with Gold Open Access and the idea that Gold OA may compound the cost problem. This is where authors pay a publication fee, or an article processing charge (APC), once a manuscript has been submitted and accepted by a journal (there are other types of Gold OA cost models). We can do a quick, off-the-cuff, rough calculation to see why this might compound the problem. As an example, PLOS ONE is one of the largest gold OA journals and charges an APC of $1,695 USD (that's $100 more than it cost about a year ago). In 2018, 32 papers were published in PLOS ONE that included at least one author from the University of Kentucky, totaling $51,040 in APCs across the 50 total institutions that were associated with these papers. That amounts to about $1,020 per institution, paid for by the authors and not by libraries. For UK authors, it also amounts to over $32,640 spent on APCs (32 * $1,020). This is about $27K more than the average price of a title in the most expensive category, Chemistry, as reported in Table 1 of the reading. I'll leave it at that.
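If you want to replicate that rough arithmetic, here it is as a short Python sketch. The figures come straight from the paragraph above (using the roughly $1,595 APC in effect in 2018); treat it as an illustration, not an exact accounting of who actually paid what:

apc = 1595                  # approximate 2018 PLOS ONE APC in USD
papers_with_uk_author = 32  # PLOS ONE papers in 2018 with at least one UK author
institutions = 50           # total institutions associated with those papers

total_apcs = apc * papers_with_uk_author                  # 51,040
share_per_institution = total_apcs / institutions         # ~1,020 per institution
uk_share = share_per_institution * papers_with_uk_author  # ~32,665 across 32 papers

print(f"${total_apcs:,}", f"${share_per_institution:,.2f}", f"${uk_share:,.2f}")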

In a follow-up video, I'll demonstrate some tools used to examine the discussed metrics.

References

Bosch, S., Albee, B., & Henderson, K. (2018). Death by 1,000 Cuts. Library Journal, 143(7), 28–33.

Sanchez, J. (2015). Chapter 2. Forecasting Public Library E-content Costs. Library Technology Reports, 51(8), 9–15. Retrieved from https://journals.ala.org/index.php/ltr/article/view/5833

Licensing Basics

This week we start two weeks of discussion on licensing. Licensing is at the top of the list of the most important aspects of electronic resource management. We start our coverage now because learning about the technical and economic aspects of ERM work first was a necessary prerequisite.

First, we should note that there is a hierarchical difference between copyright law and contract law. In short, although copyright is a kind of temporary right (that is, it lasts only until a work enters the public domain), unlike other types of rights (e.g., legal or civil rights), it is a right that can be transferred or sold by way of a contract. Contract law, thus, supersedes copyright law.

For example, licensing agreements offer copyright or intellectual property owners a contractual framework. This framework functions as a contractual agreement among two or more parties, and it enables the parties involved to use an owner's intellectual property, under certain conditions, within some range of time (all contracts must have starting and ending dates). Librarians enter into licensing agreements of all kinds. The types of things that are licensed include bibliographic databases, ILS/ERM software, and of course, e-content. Unfortunately, regarding the latter, entering a licensing agreement for e-content means that libraries do not own that content but only have access to it for a period of time, as defined in the contract. This is unlike print works, which fall under the first sale doctrine. That is, once a library owns, for example, a physical book, it owns it for as long as it wants to or can. Basically, the existence of a licensing agreement between a library and an intellectual property owner entails a lack of ownership of the item (think of item as defined by the Functional Requirements for Bibliographic Records (FRBR) model).

The readings are pretty straightforward, but let's preview some basics, which are nicely outlined in Weir (2012) (not in the reading but referenced below):

  • There are two general types of licensing agreements:
    • End user agreements are generally the kind that people accept when they use some kind of software or some service.
    • Site agreements are the kinds of agreements librarians get involved in when they negotiate for things like databases. Here, site refers to the organizational entity.
  • There are several important parts of a standard license. They include:
    • Introductions: this includes information about the licensee and the licensor, date information, some information about payments and the schedule.
    • Definitions: this section defines the major terms of the contract. Weir includes, as examples, the licensee, the licensor, authorized user, user population, and whether the contract entails a single or multi-user site.
    • Access: This covers topics such as IP authentication and proxy access.
    • Acceptable use: Included here are issues related to downloading, storage, print rights, interlibrary loan (ILL), and preservation.
    • Prohibited use: What people cannot do: download restrictions, etc.
    • Responsibilities: What the licensee's (the library's) responsibilities are. Be careful about accepting responsibility for actions that the library would have a difficult time monitoring. Then also, what the licensor's responsibilities are; this might include topics such as 24-hour access.
    • Term and terminations: Details about the terms of the contract and how the contract may be terminated. Be aware that many libraries are attached to either municipal, county, or state governments and must adhere to relevant laws.
    • Various provisions

As an example, the California Digital Library, via the University of California, provides a checklist and a copy of their standard license agreement.

The checklist covers four main sections and is well worth a read:

  • Content and Access
  • Licensing
  • Business
  • Management

We have three additional readings this week. One of the readings covers SERU: A Shared Electronic Resource Understanding, by NISO. We also have a short article by Regan that provides some guidelines on becoming competent in licensing.

SERU, Shared Electronic Resource Understanding, is a NISO collaborative document that helped standardize some aspects of the licensing process and can be used as "an alternative to a license agreement" if a provider and a library agree to use it. Like the standard licensing structure that Weir (2012) outlines, SERU includes parts that describe use, inappropriate use, access, and more, but it also posits other stipulations, such as confidentiality and privacy.

The other reading is the NASIG Core Competencies for Electronic Resources Librarians. We have already visited this, and although it doesn't specifically cover licensing, I added it to this week's reading list for a couple of reasons. First, it's a reminder that when we talk about electronic resource management, we talk about a comprehensive list of responsibilities, skills, technologies, and more, and we need to keep this on our radar. Second, the Regan reading specifically mentions these competencies, and I thought it would be good to reintroduce them here.

This week, after reading the material, I want you to focus on the Regan article and some of the questions raised there. Regan raises important questions about the licensing process, about effective communication, and about advocacy. I want you to comment on these issues, and I want you to answer some questions that Regan raises. You can do that by searching the web and library websites. In fact, at the beginning of the semester I asked you to subscribe to the SERIALST email list. It's now time to draw upon that and any related discussions you've seen on that list. You can usually search the archives of these lists, especially since traffic has been a bit light recently. Regan mentions some other sources, such as LIBLICENSE and copyrightlaws.com. Any of these are fair game for our discussions, but the latter is a commercial entity that provides material and tutorials for a fee. It might be useful to know about it, but explore it only if you want. LIBLICENSE, however, provides model licenses as well as links to additional model licenses, including the above mentioned California Digital Library Standard License Agreement. The LIBLICENSE model license includes even more details, such as types of authorized uses, and details that include provisions on:

  • course reserves
  • course packs
  • electronic links
  • scholarly sharing
  • scholarly citation
  • text and data mining

References

Regan, S. (2015). Lassoing the Licensing Beast: How Electronic Resources Librarians Can Build Competency and Advocate for Wrangling Electronic Content Licensing. The Serials Librarian, 68(1–4), 318–324. https://doi.org/10.1080/0361526X.2015.1026225

Weir, R. O. (2012). Licensing Electronic Resources and Contract Negotiation. In R. O. Weir (Ed.), Managing electronic resources: a LITA guide. Chicago: ALA TechSource, an imprint of the American Library Association.

Licensing and Negotiating

Since we are watching a long video again and also have a solid number of readings, I will keep this lecture short.

Let's talk about Abbie Brown, first. In her talk, note that she highlights the middle ground on:

  • principled negotiation
  • assertive communication

She also discusses, based on her experience, some stereotypes that get in the way of negotiating. These include:

  • Librarian stereotypes: Years ago, I was at a dinner with a friend of mine and their family when my friend mentioned me going to library school. The father of my friend then said, "isn't that women's work?" So the existence of this stereotype is true, and if applicable to you, I hope you do not have to deal with it. My advice is to stick with Abbie Brown's suggestion to focus on principled negotiation and assertive communication. And if you're judged based on other stereotypes, please don't tolerate it.
  • Library stereotypes: libraries are major players in the economy and have important economic buying power. Know that you come from a position of power.
  • Vendor stereotypes: these are just as incorrect. As Brown notes, the people who work for vendors often have Master's degrees from library science programs. In fact, I went to library school with people who now work for vendors, and students in this class have worked and do work for vendors. These people are certainly aware of the complexities surrounding libraries and librarians and the values that librarians uphold. So you will often have a lot of mutual understanding from the start and should not necessarily assume any baseline antagonism.
  • In the end, try to work against any and all of these stereotypes. I think that if you do, it will reduce a lot of the anxiety that we might carry with us when we go to the negotiating table.

I also like that Abbie Brown stresses the importance of building and maintaining relationships:

  • Focus from the beginning on building and maintaining a good relationship with your vendor. There are benefits to this, as Abbie Brown lists.

Brown's practical suggestions:

  • talk through things, with colleagues and vendors
  • write well and succinctly, and put things in writing (and be careful what you put in writing)
  • I know that some of you have already had hesitations about the licensing agreements we've discussed. Brown covers this in some detail and in real world ways, and discusses how SERU, for example, is not always used but is still helpful to have out there as a point of reference.

In addition to Brown's talk, we also read a few articles. The Smith and Hartnett article provides a nice real world example of the negotiating process that includes a work flow around licensing (again, the workflow!). Remember, document everything and revisit your documentation. That's how formalized checklists come into being and why they're useful. Having a workflow in place around licensing will help make your work more efficient and help ensure that all bases are covered.

The Dygert and Barrett article covers the specifics of licensing---what to look for, what shouldn't be given away, how to negotiate principally, and more. Likewise, the Dunie article gets into the specifics of the negotiation process and includes definitions of terms, business models, and strategies.

This week will not necessarily prepare us to become negotiators. The main point I want to make this week is this: if you find yourself in a position where one of your job responsibilities is to negotiate with vendors for e-resources (or for anything else), then come back to these sources of information and spend additional time studying them and taking notes on them. Sources like these, and others like them in the literature, will prepare you if you study them.

Acquisitions and Collections Development

Note: Add resources: https://openstax.org/subjects

Class, this week we discuss aspects of collection development and acquisitions, which is a complex problem for electronic resources. To rehash, in the print-only days, acquiring resources was a more linear process than it is now. Librarians became aware of an item, sought reviews of the item, possibly acquired the item, described it, and then shelved it. And maybe, depending on the type of library, they weeded it from the collection at some point in the years to come during their regular course of collection assessment.

There are additional vectors to be aware of with electronic resources. First, libraries may not necessarily own digital works, and different subscription services require different kinds of contracts, as we have learned. Second, electronic resources (ebooks, journal articles, databases, etc.) require different handling and disseminating processes due to technological and licensing barriers. Martin et al. (2009) truly nail the issue in their article where they write that:

As much as we would like to think our primary concerns about collecting are based on content, not format, e-resources have certainly challenged many long-established notions of how we buy, collect, preserve, and provide access to information (p. 217).

Although a world where format dictates so much is intriguing, it can also be problematic and worrisome. We think that content should be king, but we must ask: how does format either prevent or facilitate access? If you catch the implicit gotcha there, you can see that we're building a thread between acquisitions, collection development, and usability, which we'll learn more about later in the semester.

One thing about this week: in a collection development course, you would unquestionably focus on content and on the work that is involved in creating a collection development policy (which I hope you help create or spearhead if your library does not have one). I'll briefly discuss this soon. Those things are relevant to the acquisition and collection of e-resources. However, in a major way, one of the things you should take away from this week's reading is how much the management of electronic resources has impacted librarian work flows and how that has shaped, or reshaped, library organizational hierarchy. I'll provide an organizational chart for you to discuss, and we'll use it to discuss how electronic resources have shaped the organizational structure of the library.

The Lamothe (2015) article is good in a different way. Lamothe finds that if e-reference sources are collected and perpetually updated, then they get continually used. If an e-resource is static (compare, e.g., a resource pushed out as PDF, although it could be in HTML), then usage declines. I hope additional studies pursue this line of questioning, because it raises questions about the expectations that our patrons have about our content; perhaps about how fresh they expect that content to be, an expectation that may have taken a different shape in prior years. It also suggests that a resource like Wikipedia has an advantage, since much of it is regularly updated, or at least the site broadly appears to be, since many parts of it are continually updated.

The Open Educational Resources (OER) issue is a hot topic these days. Textbook prices, as the article by England et al. (2017) notes, have skyrocketed in recent decades. Some textbooks cost hundreds of dollars, and the problem impacts both school and academic libraries. UK Libraries has a helpful page about Open Educational Resources. I'll provide a link here in the transcript and in the discussion prompt, and that page links to OER content for both types of libraries, including oercommons.org. Explore this information, and discuss whether libraries ought to collect and acquire these resources (e.g., by adding records for them to their online public access catalogs or discovery systems), or whether they should not be involved at all. This seems like a duh kind of question, but libraries, public or academic, have not traditionally collected textbooks. Does this change the game for them as educational institutions?

A quick note about the organizational charts. The chart that I created was based on my readings of librarian departmental reports written during the late 1950s and early 1960s by librarians at UK. These reports are held at the University of Kentucky's Special Collections Research Center, which is fortunately in the building next to mine on campus. Organizational charts have been around since the 1800s, but I am not sure when libraries started to create and use them, and I didn't see one in my research on the history of UK Libraries for this time period. Thus, I inferred the organizational structure based on the detailed reports written by the various department heads in the library at the time. When you compare my chart based on the past to the current one provided by UK Libraries, I think you will be intrigued by how much more complicated the current chart is today.

This complexity is very interesting. The growth in electronic resources, associated technologies, and markets does not explain all of it: knowledge has become more specialized, and library organizational structure will reflect that; the student population has grown considerably since then, in size and heterogeneity, and library structure will reflect that; and the theory and praxis of library management have evolved throughout the decades, and library structure will reflect that. Other issues are at play, and it is certainly true that they are all interconnected, but I do think that technology and e-resources account for a large portion of what we see here, and I'm looking forward to reading what you have to say about this.

Finally, I'd be guilty of a serious wrongdoing if I did not discuss the importance of having a collection development policy (CDP) and using that policy to guide the collection, acquisition, and assessment of electronic resources. I don't mean to rehash any discussions you may have had about this if you've taken a collection management course; I only want to emphasize the importance of a CDP for e-resources. Unfortunately, not all libraries, even at major institutions, create or use a CDP, and if you end up working at one, then I'd highly encourage you to convince your colleagues of its importance. A CDP should define a collection and then include most if not all of the following topics:

  • mission, vision, and values statement
  • purpose of CDP statement (scope may be included here)
  • selection criteria: this could be general but it could also include subsections that focus on specific populations, genres, resource types, and more
  • assessment and maintenance criteria
  • challenged materials criteria (especially important at public and K-12 libraries)
  • weeding and/or replacement criteria

Included in this transcript are links to two CDP policies, one from the University of Maryland and one from the Lexington Public Library. The Maryland CDP is not their main one but a sub-CDP that focuses on electronic resources. The LPL policy is the library's main one, and although it does not include a long discussion of electronic resources, they are mentioned in the policy. Neither approach is wrong; each is catered to its specific library and that library's purposes, communities, and vision statements. This week, I would like you to search the web for more and to read through the ones you find to see how they discuss electronic resources.

Links:

References

England, L., Foge, M., Harding, J., & Miller, S. (2017). ERM Ideas & Innovations. Journal of Electronic Resources Librarianship, 29(2), 110–116. https://doi.org/10.1080/1941126X.2017.1304767

Lamothe, A. R. (2015). Comparing usage between dynamic and static e-reference collections. Collection Building, 34(3), 78–88. https://doi.org/10.1108/CB-04-2015-0006

Martin, H., Robles-Smith, K., Garrison, J., & Way, D. (2009). Methods and Strategies for Creating a Culture of Collections Assessment at Comprehensive Universities. Journal of Electronic Resources Librarianship, 21(3–4), 213–236. https://doi.org/10.1080/19411260903466269

Chapter Four: Patrons

This chapter will be updated (8/20/2022).

User Experience

Medieval helpdesk with subtitles

Dickson-Deane and Chen (2018) write that "user experience determines the quality of an interaction being used by an actor in order to achieve a specific outcome." Parush (2017) highlights relevant terms like human-computer interaction (HCI) and usability. Let's say then that HCI encompasses the entire domain of interaction between people and computers and how that interaction is designed, whereas user experience (UX) focuses on the quality of that interaction or an interaction with something (like a product). These are not precise definitions, and some might use the terms UX and HCI interchangeably. As ERM librarians, though, it is our job to focus on the quality of the patron's experience with our electronic services, and this entails understanding both the systems and technologies involved and the users interacting with these systems and technologies.

Dickson-Deane and Chen (2018) outline the parameters that are involved with UX. Let me modify their example and frame it within the context of an ERM experience for a patron:

  • Actor: a user of the web resource, like a library website
  • Object: the web resource, or some part of it
  • Context: the setting. What's happening? What's the motivation for use? What's the background knowledge? What's the action?
  • User Interface: the tools made available by the object as well as the look and feel. More specifically, Parush (2017) states that "the user interface mediates between the user and computer" and it includes three basic components:
    • controls: the tools used to control and interact with the system: buttons, menus, voice commands, keyboards, etc
    • displays: the information presented to the user or the hardware used to present the information: screens, speakers, etc.
    • interactions and dialogues: the exchange between the system and the user including the use of the controls and responding to feedback
  • Interaction: what the actor is doing with the UI
  • (Intended/Expected/Prior) User experience: the intended or expected use of the object. The user's expectations based on prior use.
  • (Actual) User experience: the actions that took place; the actions that had to be modified based on unintended results.

These parameters would be helpful for devising a UX study that involves observing patrons interacting with a system and then interviewing them to complete the details. Note that the same kind of systematic thinking can be applied to evaluate other user experiences, like those between a librarian and an electronic resource management system. Often the focus is on patron user experience, but it's just as important to evaluate UX for librarians and to consider UX when selecting an ERM system or an ILS system.

In any case, these parameters help us step through and highlight the complicated process of interacting with a computer, generally, or a library resource, more specifically. As with many other topics we've discussed this semester, we can also incorporate these parameters into a workflow for evaluating UX. But it is because of the complexities involved, and a focus on the systems, that Pennington (2015) argues for a more UX centered approach to library website design. To emphasize the case, think about your own state of knowledge of ERM before you started to take this course. For example, a number of you have explored in depth how link resolvers function, and your experience using them as patrons and your understanding of their technical aspects as librarians have provided you with a set of skills and experiences that make you more likely to identify the cause of a malfunction if you find one. With the ability to suss out an issue, it becomes easier to solve, and the experience itself involves less anxiety. Most users and patrons of these systems will not have any technical knowledge of them, and when the systems break, their frustration with the experience might lead to unfortunate outcomes. For example, they may not retrieve the information they need; they may not reach out to a librarian; they might stop using the library's resources in favor of something of inferior quality; and so forth. We must not forget that this happens and, if possible, design for such failures (because systems will fail). How might we, for instance, design an action to occur when a link resolver or a proxy server fails to identify and retrieve a work?
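As one hypothetical answer to that question, here is a minimal Python sketch of a graceful fallback: if the resolved full-text URL does not respond successfully, send the patron to an interlibrary loan request form rather than to a dead end. The URLs and the use of the requests library are illustrative assumptions, not a description of any real link resolver:

import requests

ILL_FORM_URL = "https://example-library.edu/ill/request"  # placeholder fallback target

def resolve_or_fallback(fulltext_url, timeout=5.0):
    """Return the full-text URL if it responds with 200; otherwise return the ILL form URL."""
    try:
        response = requests.head(fulltext_url, allow_redirects=True, timeout=timeout)
        if response.status_code == 200:
            return fulltext_url
    except requests.RequestException:
        pass  # timeout, DNS failure, broken proxy chain, etc.
    return ILL_FORM_URL

# A stale or broken target quietly routes the patron to the ILL form instead.
print(resolve_or_fallback("https://example.com/some/stale/article/link"))

Some servers reject HEAD requests, so a real implementation would need more care, but the point is simply that failure can be designed for rather than left to the patron.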

Here's the crux. As you gain more skill and expertise with these systems, you will eventually lose the ability to see these systems as a novice user, and that distance will only grow over time. It is therefore, as Pennington (2015) argues, important to gather data from users. User experience research nurtures a user centered mindset.

The second article, by Pennington, Chapman, Fry, Deschenes, and McDonald (2016), is a contribution from several librarians with theoretical and practical suggestions for UX research. Both Pennington and Chapman note that the best way to measure UX or detect UX issues is to conduct research with users (recall our conversations on proactive troubleshooting), but this isn't always possible due to financial, time, or other constraints. However, there is a wealth of research on UX, and in the absence of the ability to conduct your own research, locating prior research and applying it to your setting is paramount. Drawing from the literature, Chapman describes several important UX principles that include:

  • chunking
  • highlighting and prominence
  • KISS
  • choice simplification
  • choice reduction

In addition to user studies and prior research, libraries possess a wealth of data to explore, and this data could provide a lot of insight. The same caveat that Chapman described applies: only if you have time to analyze the data. But Fry points out that you can access library data about users' locations, device types, resource use, and more. (There are some privacy issues involved with this, and we'll talk about them at the end of the semester.) Furthermore, here's where the Browning (2015) article is helpful. Whereas Fry describes what we can learn from data about usage, Browning describes what we can learn from data about breakage. Both kinds of data can offer us a substantial understanding of the user experience.

If you can conduct a user study, then Deschenes offers helpful tips on recruiting users. Remember that the users that you recruit should be actual users of the systems. If you are interested in why some segments of the population do not use the library's resources, then that would be a different kind of study.

I agree with McDonald that despite having around 30 or so years of experience with web-based electronic resources, and more with other types of electronic resources that existed prior to the web and were based on the early internet, on optical disks, etc., we are still in the throes of disruption. There's a lot yet to learn about design for the web, just like there's a lot left to learn about how to design a home or office, and nothing will be settled for a while. Although I doubt there will be any single dominant user experience or user interface, since there are many cultures and backgrounds and experiences, I'm fairly sure the low-hanging fruit problems will be worked out eventually. Remember, though, that 95% of the cause of all of this is due to copyright issues, which necessitate the entire electronic resource ecosystem and the complications introduced by having to work with vendors who work with different, but overlapping, publishers, and so on. If something were to change about copyright, then it's a whole new ballgame.

On a final note, you might be wondering how information seeking is related to HCI and to UX. Anytime we interact with a computer (broadly speaking) in order to seek information, then we have an overlap. But there are areas of non-overlap, too. We don't always use computers to look for information, and we don't always look for information on computers. UX is like this, too. UX is not always about computers but can be about user experience generally. I bring this up because if you do become involved with UX work at a library (or elsewhere), then I'd encourage you to refer also to the information seeking and related literature when it's appropriate to do so. Remember, it's all interconnected.

References

Browning, S. (2015). Data, Data, Everywhere, nor Any Time to Think: DIY Analysis of E-Resource Access Problems. Journal of Electronic Resources Librarianship, 27(1), 26–34. https://doi.org/10.1080/1941126X.2015.999521

Dickson-Deane, C., & Chen, H.-L. (Oliver). (2018). Understanding User Experience. In M. Khosrow-Pour (Ed.), Encyclopedia of Information Science and Technology (Fourth Edition, pp. 7599–7608). IGI Global. https://doi.org/10.4018/978-1-5225-2255-3.ch661

Parush, A. (2017). Human-computer interaction. In S. G. Rogelberg (Ed.), The SAGE Encyclopedia of Industrial and Organizational Psychology (2nd edition, pp. 669–674). SAGE Publications, Inc. https://doi.org/10.4135/9781483386874.n229

Pennington, B. (2015). ERM UX: Electronic Resources Management and the User Experience. Serials Review, 41(3), 194–198. https://doi.org/10.1080/00987913.2015.1069527

Pennington, B., Chapman, S., Fry, A., Deschenes, A., & McDonald, C. G. (2016). Strategies to Improve the User Experience. Serials Review, 42(1), 47–58. https://doi.org/10.1080/00987913.2016.1140614

Evaluation and Statistics

We've discussed problems this semester with defining terms, and we have learned that a lot of effort has been expended on standardizing them. We have also seen that the topics that we've covered---technologies, standards, access, usability, workflow, markets, licensing---are linked in some way. All this complexity makes the measurement of usage that much more complicated. When electronic resources (or basically anything on the web and internet) are accessed, a computer server somewhere keeps a log of that interaction. Having logs available makes it seem that we can have accurate data about usage, but that is no guarantee, and the insight we can glean is always difficult to acquire no matter how much data is available to us.

Example web server access log entry (I've obfuscated the IP address and the website domain):

99.999.99.99 - - [21/Oct/2021:10:54:35 -0400] "GET /favicon.ico HTTP/1.1" 404 517 "https://WEBSITE.edu/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:93.0) Gecko/20100101 Firefox/93.0"

Access log type data can be good data to explore, but we have to be mindful that all data has limitations, and that there are different ways to define what usage means. For example, the log snippet above indicates that I visited that link at that server, but does that mean that I really used that website even though I accessed it? Even if we can claim that I did, what kind of use was it? Can we tell? (We can actually learn quite a lot from web server access logs.)
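To give a sense of what learning from access logs looks like in practice, here is a minimal Python sketch that parses a combined-format log line like the one above into named fields. The regular expression assumes the standard Apache/Nginx combined format; it's an illustration, not a production log parser:

import re

LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) (?P<protocol>[^"]+)" '
    r'(?P<status>\d{3}) (?P<size>\S+) "(?P<referrer>[^"]*)" "(?P<user_agent>[^"]*)"'
)

line = ('99.999.99.99 - - [21/Oct/2021:10:54:35 -0400] "GET /favicon.ico HTTP/1.1" '
        '404 517 "https://WEBSITE.edu/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; '
        'rv:93.0) Gecko/20100101 Firefox/93.0"')

match = LOG_PATTERN.match(line)
if match:
    fields = match.groupdict()
    # What was requested, when, with what result, and from what browser.
    print(fields["path"], fields["status"], fields["timestamp"], fields["user_agent"])

Even this handful of fields shows the tension described above: the log records an access (here, a failed request for a favicon), but it cannot tell us whether that access amounted to meaningful use.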

This week we learn about the efforts that have gone into standardizing usage metrics and usage reports via Project Counter. Also, we see a few examples of how usage data can help inform collection development and benefit the library in other ways. This is an important area of electronic resource librarianship, but it's also an area that may overlap with other parts of librarianship, such as systems librarianship or collection development. Here we might see job titles like library systems administrator.

Project Counter

Project Counter is a Code of Practice that seeks to help provide more informative and consistent reporting of electronic resource usage. From Project Counter:

Since its inception in 2002, COUNTER has been focused on providing a code of practice that helps ensure librarians have access to consistent, comparable, and credible usage reporting for their online scholarly information. COUNTER serves librarians, content providers, and others by facilitating the recording and exchange of online usage statistics. The COUNTER Code of Practice provides guidance on data elements to be measured and definitions of these data elements, as well as guidelines for output report content and formatting and requirements for data processing and auditing. To have their usage statistics and reports designated COUNTER compliant, content providers MUST provide usage statistics that conform to the current Code of Practice.

Thus, these reports were designed to help solve a problem that will likely never completely be solved, but it's still an important and useful effort. The main goal of Counter is to provide usage reports, and the reports, for version 5 of Counter, cover four major areas:

  • Platforms
  • Databases
  • Titles
  • Items

And you can see which reports these four replace in a table in Appendix B of the Code of Practice.

Counter 5 was designed to include better reporting consistency, better clarity of metrics that measure usage activity, better views of the data, and more. In order to clarify the purpose of Counter, let's review the introduction to the Code of Practice, which articulates the purpose, scope, application, and more of Counter.

[ Note: Review the Introduction to the Code of Practice ]

Readings

Pesch (2017) provides a helpful introduction to the history of Project Counter and the migration from Counter version 4 to version 5. Table 1 in Pesch describes the four major reports. Most of the reports are self-explanatory: Database, Title, and Item reports cover what their names describe, but Platform reports might be less obvious. These reports include usage metrics at the broadest level, for things like the EBSCOhost databases, ProQuest databases, SAGE resources, Web of Science databases, and so on. They come into play when users/patrons search the overall platform but not any single database provided by the platform. For example, UK Libraries subscribes to the ProQuest databases, and for us that includes 35 primary databases. Users can search many at the same time or search any single one. The same holds for platforms like EBSCOhost, Web of Science, and others. This is the platform level.
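Counter 5 also standardizes machine-to-machine harvesting of these reports through the COUNTER_SUSHI API, which means usage collection can be scripted rather than downloaded by hand. Here is a hedged Python sketch of fetching a Title Report (TR); the base URL and credentials are placeholders, since every vendor publishes its own SUSHI endpoint and keys:

import requests

SUSHI_BASE = "https://sushi.example-vendor.com/r5"  # placeholder vendor endpoint
params = {
    "customer_id": "YOUR_CUSTOMER_ID",    # placeholder credentials
    "requestor_id": "YOUR_REQUESTOR_ID",
    "api_key": "YOUR_API_KEY",
    "begin_date": "2022-01",
    "end_date": "2022-06",
}

response = requests.get(f"{SUSHI_BASE}/reports/tr", params=params, timeout=30)
response.raise_for_status()
report = response.json()

# Each report item carries title-level metadata plus monthly performance counts.
for item in report.get("Report_Items", [])[:5]:
    print(item.get("Title"), "|", item.get("Platform"))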

The Scott (2016) article provides a nice use case for how Counter reports can inform collection development. We've talked a bit about the Big Deal packages that more libraries are trying to move away from because such deals often include access to titles that are not used or not relevant to a library's community. Here Scott shows that it might be possible to use this data to avoid subscribing to some services, but it's also important to read closely and understand the problems associated with interlibrary loan, the metrics, and the other limitations described in the Conclusion section of this article.
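To see the shape of that kind of analysis, here is a simplified cost-per-use comparison in Python. It is not Scott's exact method, and every number below is made up; the point is only that COUNTER usage lets you weigh a subscription against an estimated interlibrary loan alternative:

subscription_cost = 3_200.00   # hypothetical annual subscription price
total_item_requests = 240      # hypothetical COUNTER 5 Total_Item_Requests for the year
ill_cost_per_request = 17.50   # hypothetical average cost to fill one ILL request

cost_per_use = subscription_cost / total_item_requests
ill_alternative = total_item_requests * ill_cost_per_request

print(f"Cost per use under subscription: ${cost_per_use:.2f}")
print(f"Estimated cost if every request went through ILL: ${ill_alternative:,.2f}")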

We move away from Project Counter with the Stone and Ramsden (2013) article. I wanted to introduce this article because it highlights how metrics can be used to assess the value of a library, a value that is often underestimated by administration but must constantly be demonstrated in order to garner the resources needed to improve or sustain a library's offerings. Here Stone and Ramsden investigate the correlation (not causation) between library usage and student retention. Increasing the latter is the Holy Grail of colleges and universities. If this were a public library report, it might be interesting to see how well electronic library usage correlates with continued usage and how such a correlation might relate to various outcomes defined by the library. One nice thing about the Stone and Ramsden article is that it does not depend on quantitative metrics alone but supports its findings through qualitative research. There's only so much a usage metric can say.

Finally, I would like you to be aware of the Code4Lib Journal, and this article by Zou is pretty cool. Although it overlaps with some security issues, a topic that we'll cover in our last week of the semester, the article also provides a way of thinking outside the box about the metrics that you'll have access to as an electronic resource librarian. Here, Zou describes a process of taking EZproxy logs (compare the example entry with the web server entry I included above) and turning them into something useful and dynamic by incorporating some additional technologies. Recall that EZproxy is software that authenticates users and provides access based on that authentication. We use EZproxy at UK whenever we access a paywalled journal article. That is, you've noticed the ezproxy.uky.edu string in any URL for a journal that you've accessed via UK Libraries' installation of EZproxy, and https://login.ezproxy.uky.edu/login is the login URL. Zou specifically references the standard way of analyzing these logs (take a look at the page at that link), which can be insightful and helpful, but Zou's method makes the analysis of these logs more visual and closer to real time. The main weakness with Zou's method is that it seems to me to be highly dependent on Zou doing the work. If Zou leaves their library, then this customized analysis might not last. Still, it's good to know that if you have an interest in developing skills with systems administration, with various other technologies, and with some basic scripting, this kind of thing, and more, is possible.
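You don't need Zou's full pipeline to start asking questions of proxy logs. As a simple, hypothetical illustration (not Zou's method), the following Python sketch tallies the most frequently requested hosts in a log file. It assumes the requested URL is the seventh whitespace-separated field, which matches the combined-format example above but may differ in a given EZproxy configuration; the log path is a placeholder:

from collections import Counter
from urllib.parse import urlparse

def top_hosts(log_path, n=10):
    """Count the most frequently requested hosts in a combined-format log file."""
    counts = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            parts = line.split()
            if len(parts) < 7:
                continue              # skip malformed lines
            url = parts[6]            # the requested URL or path
            host = urlparse(url).netloc or url
            counts[host] += 1
    return counts.most_common(n)

# Example call with a placeholder path:
# print(top_hosts("/var/log/ezproxy/ezproxy.log"))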

Privacy and Security

Breeding starts off Chapter 1 of this week's reading from Library Technology Reports with the following statement:

Libraries have a long tradition of taking extraordinary measures to ensure the privacy of those who use their facilities and access their materials.

This is mostly true but not entirely so. When I was an undergraduate, I remember going to the library to look for books on a sensitive topic. I saw a book on the shelves that looked relevant, and when I pulled it off the shelf and opened it, I noticed that a friend of mine had checked the book out before me because their name was written on the due date card in their handwriting. Even though I had grown up with these due date cards in library books, it had never occurred to me before then how these cards could pose a problem with privacy. At the time, I decided not to check out that book because of that issue.

We might be comforted in thinking that the kind of information revealed to me in that book would not scale up easily. It was a serendipitous event: I was looking for a book on the same topic and just happened to pick the one book that my friend had used. It's not likely, then, that this would pose a big problem at scale.

However, let's think of the information on that due date card as metadata, and then ask, how could we use it? The sociologist Kieran Healy did that kind of thing with membership lists from colonial times. He showed that using limited data like what I found in that book, some important things could be discovered. For example, Healy imagined that if the British had access to simple social network analysis tools back in 1772, they could have identified Paul Revere as a patriot and then used that information to prevent or interfere with the American Revolution. I encourage you to read his blog entry and his follow-up reflection because it is a neat what-if hypothetical case study.

Most libraries in North America have replaced due date slips with bar codes, and while this has removed the problem above, the overall migration from paper-based workflows to electronic ones has raised other problems. Not long after the Patriot Act was passed after 9/11, FBI agents ordered a Connecticut librarian to "identify patrons who had used library computers online at a specific time one year earlier". Per the law, the librarians involved were placed under a gag order, which prevented them from speaking out. This led to a lawsuit against the then US Attorney General. Eventually the librarians were released from the gag order and allowed to discuss the event.

There are occasionally big, dramatic cases like the one described above, but privacy and security issues are often much more mundane and still quite important. Since many users of libraries of all types visit library homepages, encrypting all of that web traffic is important. A year and a half ago, the major web browsers announced that they would no longer support Transport Layer Security (TLS) protocol versions 1.1 or earlier, and that any site that had not yet migrated to TLS version 1.2 or above would not be accessible. TLS is used to encrypt web traffic. This news came out in early March 2020, and the browsers postponed blocking these sites because of the pandemic. But the change went into effect a few months later, and as you can see in the screenshot below, for a while I had to enable an insecure connection to libraries.uky.edu in Firefox if I wanted to visit it. When I enabled it, my activity on libraries.uky.edu was potentially visible to others under certain conditions. Note, however, that once I sign into my library account, I'm transferred to Primo's cloud service, and then to UK's OAuth page to log in; those parts of the encryption chain are fairly strong. So it was only activity specifically on libraries.uky.edu that was poorly protected at the time.

UK Libraries TLS Version Block in Firefox
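If you are curious which TLS version a site negotiates today, you can check with nothing more than the Python standard library. This is a minimal sketch; the hostname is just the example discussed above, and recent Python versions, like modern browsers, set a minimum of TLS 1.2 by default:

import socket
import ssl

hostname = "libraries.uky.edu"   # example host from the discussion above
context = ssl.create_default_context()

with socket.create_connection((hostname, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print(tls.version())     # e.g., 'TLSv1.2' or 'TLSv1.3'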

In Chapter 1 of this week's reading, Breeding does a nice job introducing a variety of technologies and policies related to security and privacy. There are important technological things to consider, like web traffic encryption. There are also important policy considerations, like how third party vendors implement privacy and security mechanisms (like the Primo example above). Note that the SERU recommended practice has a section dedicated to Confidentiality and Privacy. Even if you work at a library that does not use SERU, this is how SERU can be useful: it can inform us of the kinds of provisions that a library ought to have in a license if the default provisions a vendor proposes do not include the necessary components.

In Chapter 2, Breeding reports on the state of privacy and security protections among a selection of ERM vendors. I should note that although this article is only four years old, this area moves fast, and it's likely that the problems Breeding identifies with some implementations have been fixed in recent years. As I noted, since browsers like Chrome, Firefox, Edge, and Safari have grown stricter about enforcing encryption, web services have complied by adhering to better security standards. Chapter 3 covers the same issue from the library's perspective. Here is where improvements are much less likely to be seen. For example, a number of library sites use Google Analytics to track site usage and other metrics, but this means that our actions on these library sites, albeit somewhat anonymized, are being collected and stored by Google (or some other analytics service). Also, many websites, library websites included, use fonts that are hosted on other servers, and doing so means that those requests can be tracked, too (see the access log entry above as a basic example). There's a trade-off, though. If we want to learn more about how users interact with the site in order to improve usability and accessibility, then we have to have some of this data. Here's where a privacy policy might come into play.

Thus, I point to a controversial article that was published in the Code4Lib Journal last year. I'd highly encourage you to peruse the article as well as the comments at the bottom of the page:

(Demo library connection in Firefox web developer settings)

Please read the articles closely, and let's have an open discussion.