As I've stated in a previous lecture, Access is the paramount principle of librarianship; all other issues within LIS, from censorship to information retrieval to usability, are on some level derived from or framed by that principle of Access.
This week, then, we devote ourselves to a discussion of electronic access, and it is helpful to begin with Samples and Healy (2014), who provide a nice framework for thinking about managing electronic access. They identify two broad categories: proactive troubleshooting and reactive troubleshooting of access.
- proactive troubleshooting of access: "defined as troubleshooting access problems before they are identified by a patron". Some examples include:
    - "letting public-facing library staff know about planned database downtime"
    - "doing a complete inventory to make sure that every database paid for is in fact 'turned on'"
- reactive troubleshooting of access: "defined as troubleshooting access issues as problems are identified and reported by a patron". Some examples include:
    - "fixing broken links"
    - "fixing incorrect coverage date ranges in the catalog"
    - "patron education about accessing full text"
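Some reactive work, like fixing broken links, can be pushed toward the proactive side with simple automation. As a minimal sketch (the function and the idea of exporting a URL list are my own illustration, not something Samples and Healy prescribe), a script run on a schedule could flag dead e-resource links before a patron ever reports one:

```python
# Hypothetical proactive link audit: check each exported e-resource URL
# and record whether it responds, so broken links surface before patrons
# find them. Assumes the library can export its URLs to a plain list.
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

def check_links(urls, timeout=10):
    """Return a dict mapping each URL to 'ok' or a short error note."""
    results = {}
    for url in urls:
        try:
            with urlopen(url, timeout=timeout) as resp:
                results[url] = "ok" if resp.status == 200 else f"HTTP {resp.status}"
        except HTTPError as e:
            results[url] = f"HTTP {e.code}"            # broken or restricted link
        except URLError as e:
            results[url] = f"unreachable ({e.reason})" # DNS failure, timeout, etc.
    return results
```

A real audit would also need to handle proxied URLs and vendor platforms that block automated requests, but the core idea, check before the patron does, is this simple.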
The goal here, as Samples and Healy (2014) suggest, is to maximize proactive troubleshooting and minimize reactive troubleshooting; in other words, to prevent problems before they happen.

Samples and Healy's report is also a great example of why we call the field library science. The point of making librarianship a science was to make it rigorous: to study the problems of libraries systematically and to generalize solutions from that study. The project they describe does this nicely. The authors identified a problem that had grown "organically," collected and analyzed data, and then generalized from it by outlining a "detailed workflow" to "improve the timeliness and accuracy of electronic resource work." Practically, studies like this promise to improve productivity and to produce better workflows that foster job and patron satisfaction. They can also help librarians identify the kinds of software solutions that align with their workflows and their patrons' information behaviors.

If you're interested, a recent article in the Journal of Electronic Resources Librarianship examines the impact of COVID-19 on electronic resource management (Lowe et al., 2021). Six authors individually describe access issues at their respective institutions and show how pricing, acquisitions, training, user expectations, and budgets affect electronic access. It is worth reading articles like this in light of the framework provided by Samples and Healy (2014), because stories about the pandemic's impact on electronic access can help guide us in developing proactive troubleshooting procedures that will minimize issues at our own institutions in the future, pandemic or not.
Lowe, R. A., Chirombo, F., Coogan, J. F., Dodd, A., Hutchinson, C., & Nagata, J. (2021). Electronic Resources Management in the Time of COVID-19: Challenges and Opportunities Experienced by Six Academic Libraries. Journal of Electronic Resources Librarianship, 33(3), 215–223. https://doi.org/10.1080/1941126X.2021.1949162
Back to Samples and Healy (2014), who push back against a common assumption about electronic resources, particularly those provided by vendors:
The impression that once a resource is acquired, it is then just 'accessible' belies the actual, shifting nature of electronic resources, where continual changes in URLs, domain names, or incompatible metadata causes articles and ebooks to be available one day, but not the next.
Hence, unlike a printed work from the print-only era, which, once cataloged, might sit on a shelf for decades or longer without major access problems, electronic resources require constant, active attention to remain accessible. Ebooks, for example, can create metadata problems. What matters about scholarly ebooks in particular is often the chapters they contain, so metadata describing ebook components is important, along with links to those chapters in discovery systems. This difference between item-level and title-level cataloging, as Samples and Healy describe, can lead to confusing and problematic results across different genres and the components those genres contain. They also note that a series of links connects the point of discovery (e.g., an OPAC) to the retrieved item, and that it can be difficult to determine which of those links or services is broken when access fails.
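The point about the chain of links can be made concrete. In a hypothetical sketch (the service names, URLs, and checker function below are invented for illustration), diagnosing an access failure amounts to walking each hop from discovery to delivery and reporting the first one that fails:

```python
def first_broken_hop(chain, is_reachable):
    """Walk an ordered access chain (e.g., OPAC record -> link resolver ->
    platform full text) and return the name of the first service whose URL
    fails the supplied reachability check; None if every hop responds."""
    for name, url in chain:
        if not is_reachable(url):
            return name
    return None  # every hop responded; the problem lies elsewhere

# Usage with a stubbed checker standing in for a real HTTP test:
chain = [
    ("OPAC record", "https://catalog.example.edu/record/123"),
    ("link resolver", "https://resolver.example.edu/openurl?id=doi:10.1000/example"),
    ("platform full text", "https://platform.example.com/article/456"),
]
up = {
    "https://catalog.example.edu/record/123",
    "https://resolver.example.edu/openurl?id=doi:10.1000/example",
}
print(first_broken_hop(chain, lambda url: url in up))  # -> platform full text
```

The injected `is_reachable` function keeps the sketch testable without a live network; in practice it would be an HTTP check, and the diagnosis would also need to account for authentication and proxy hops.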
Let me highlight a few key findings from their report:
- Workflows: why does this keep coming up? Because workflows help automate a process, simplifying and smoothing out what needs to be done, and because this is only possible when things are standardized.
- Staffing: we'll discuss this more in a future forum, but part of the problem here is that ERM has had a major impact on organizational structure, and different libraries have responded to it differently. This lack of organizational standardization has benefits for local management practices and cultures, but it also has a huge drawback: it makes it difficult to establish effective, generalized workflows that include the key participants and minimize dependencies on any one person.
- Tracking: if there's no tracking, there's no way to systematically identify patterns in problems. And if that's not possible, there's no way to solve those problems proactively. Everything becomes reactive, and reactive troubleshooting, as Samples and Healy indicate, results in poor patron experiences. We'll return to tracking during the week on Evaluation and Statistics.
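To make the tracking point concrete: even a very simple log, tallied by category, turns scattered reactive reports into a pattern you can act on proactively. A minimal sketch, with invented sample data:

```python
# Hypothetical access-problem log; a tally by category surfaces the
# recurring issues that deserve a proactive fix rather than one-off repairs.
from collections import Counter

issue_log = [
    {"date": "2024-03-01", "category": "broken link"},
    {"date": "2024-03-02", "category": "coverage dates"},
    {"date": "2024-03-03", "category": "broken link"},
    {"date": "2024-03-04", "category": "authentication"},
    {"date": "2024-03-05", "category": "broken link"},
]

by_category = Counter(entry["category"] for entry in issue_log)
for category, count in by_category.most_common():
    print(f"{category}: {count}")  # most frequent category prints first
```

A spreadsheet accomplishes the same thing; what matters is that reports are recorded in a consistent, categorized form so patterns can be counted at all.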
I like the Carter and Traill (2017) article because we commonly hear the line that discovery systems are a great solution for searching across all the disparate, discipline-based resources that libraries subscribe to. And when we do think about problems with such systems, we are often presented with a basic information retrieval problem: the larger the collection being searched, the more likely a relevant item will get lost in the mix. But as Carter and Traill (2017) point out, discovery systems also tend to reveal access problems as they are used. Based on their analysis, the authors provide a checklist to track issues and improve existing workflows.
The Buhler and Cataldo (2016) article provides an important reminder that the mission of the electronic resources librarian is to serve the patron. It should also remind us that the internet and the web have flattened genres, making it difficult to distinguish among things like magazine articles, news articles, journal articles, encyclopedia articles, ebooks, etc. Though the Buhler and Cataldo (2016) reading is student-focused, other studies have hinted at the same issue across other populations. As ERM librarians, it's important to recognize these issues and work to resolve them in whatever ways you can.
I grew up with some understanding of the differences between an encyclopedia article, a journal article, a magazine article, a newspaper article, a chapter in a book, a handbook, an index, a dictionary, etc., because I grew up with the printed versions, and these were tangible things that looked different from one another. Today, a traditional first-year college student was born around 2003 and learned to read around the end of that first decade. The problem this raises is that although electronic resources are electronic or digital, they are still based on genres that originated in the print age, yet they lack the physical characteristics that helped distinguish one from another. E.g., what's the difference between a longer NY Times article (traditionally a newspaper article) and an article in the New Yorker (traditionally a magazine article) today? Aside from some aesthetic differences, both are web pages, and it's not altogether obvious, from any cursory examination by regular users, that they're entirely different genres. Yet there are important informational differences between the two: how they were written, how they were edited, and who wrote them might still lead us to consider them different genres.

Even Wikipedia articles pose this problem. Citing an encyclopedia article was never an accepted practice, but this was only true for general encyclopedias. It was generally okay to cite articles from special encyclopedias, because they focused on limited subject matters, like specific areas of art, music, science, or culture, and were usually more in-depth in their coverage. Examples include the Encyclopedia of GIS, the Encyclopedia of Evolution, The Kentucky African American Encyclopedia, The Encyclopedia of Virtual Art Carving Toraja--Indonesia, and so forth.
There are some studies that show that Wikipedia does provide that kind of in-depth coverage of some subject matters, thus helping to flatten the encyclopedia genre, too.
The flattening also holds true for things like Google. The best print analogy for Google is the index, which was used to locate keywords that pointed to source material. See the links in the discussion post for examples. The main difference between these indexes and Google is that the indexes covered specific publications, like a newspaper, or specific areas, like the Social Science Citation Index or the Science Citation Index, both of which are actual, documented, historical precursors to Google and Google Scholar. But today these search engines are erroneously treated as source material (e.g., "I found it on Google"). Few, I think, would have considered a print index to be source material; it was a reference item, since it referred users to sources. Now it's all mixed up, but who can blame anyone.
Let's discuss all of this on the board.