Introduction

Like other academic libraries, Nottingham Trent University (NTU) Libraries and Learning Resources (LLR) has been using COUNTER-compliant usage statistics for many years to help inform and underpin decision making. They have been the core metrics driving electronic serials and database renewal decisions; have provided evidence to support cancellation decisions; have triggered acquisitions when presented in the form of turnaway and access denial statistics; and are an everyday part of managing electronic resources. As library collections become increasingly electronic, and their usage becomes less visible and more anonymous, usage statistics are often the only indication available to libraries to demonstrate how well they are fulfilling the needs of their customers.

LLR has had recent experiences with multiple concurrent evidence-based acquisitions (EBA) plans. Under the EBA model, institutions make a prepayment to a publisher and, in return, are given access to an extensive online collection of content, with unlimited access for a set period. Unlike patron-driven acquisition plans, where acquisition is automatically triggered when usage reaches a certain level, with EBA, the institution makes decisions at the end of the plan on which titles it wishes to license in perpetuity using the prepaid funds. The evidence to support this decision making is usually supplied in the form of usage reports, indicating which titles have been the most used within the institution during the trial period. These plans proved very popular with NTU customers but created significant challenges for LLR in analysing and understanding the data from different publisher plans. They led to increased interest in other metrics which might provide greater insight into how much customers were engaging with the content provided by the library.

Background

In 2015 LLR launched a service called Your Books More Books (YBMB), whereby the Library undertook to supply final-year undergraduates and postgraduates with any items not already in stock within three days of their request. The YBMB service had two strands: one strand offered an expedited inter-library loan (ILL) service, satisfying requests for items not held in the library by using the quickest and easiest supply route, and the second strand saw the launch of five EBA plans aimed at increasing the number of e-books available to users, in the hope that this would satisfy some of their information needs and help to prevent the ILL service from being overstretched.

The expedited ILL service included next-day British Library or Amazon Prime delivery and short-term e-book rentals. At the end of the project, the data were analysed and it was found that a significant number of the short-term e-book rental links that had been sent to users had been used for less than five minutes, and had therefore incurred no charge. LLR questioned whether this was a success: although it could be argued that free usage was a good outcome for the Library, what was the verdict of the customers who had hardly used the links they had been sent? Were they:

  • unhappy that the book was not what they needed?
  • neutral about the experience, because they could tick the book off the list of titles that they needed to look at?
  • thrilled because they found the quote or references they needed in no time at all?

It was impossible to tell.

The five EBA plans enabled LLR to make an additional 60,000 e-books available, and, at the end of each EBA plan, the usage statistics showed that each one had been used differently, with the initially deposited funds performing in widely different ways. For one plan, the prepaid funds could purchase all items used nine times or more; for another, items used 25 times or more; for a third, items used seven times or more, and so on. These variations were almost certainly influenced by the fact that the plans varied in both the amount of content they contained and the size of the prepayment required, but it was impossible to establish whether these inconsistencies were a cause for concern, and there was no clear idea of what constituted ‘good’ usage. LLR was concerned that there might be some backlash from customers who had used content which was subsequently not selected for perpetual access, especially in the plans where the initial deposit was only sufficient to purchase items used 25 times or more. The final decisions regarding what content to retain and purchase were largely based on usage and affordability, but, for some of the plans, there were lingering doubts regarding the decisions made and whether the correct items had been purchased.
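By way of illustration only, the Python sketch below shows the kind of arithmetic involved at the end of a plan: given a prepaid fund and a set of titles with usage counts and list prices, it finds the usage threshold at which every title meeting that threshold can be retained within the fund. The titles, prices and fund are invented for the example; this is not any publisher's actual selection algorithm.

```python
# Hypothetical sketch of the end-of-plan arithmetic described above:
# given a prepaid fund and (title, uses, price) records, find the usage
# threshold at which every qualifying title can be bought within the fund.

def affordable_threshold(titles, fund):
    """Return the lowest usage count n such that buying every title
    used n or more times fits within the prepaid fund, plus the spend."""
    for n in sorted({uses for _, uses, _ in titles}):
        cost = sum(price for _, uses, price in titles if uses >= n)
        if cost <= fund:
            return n, cost
    return None, 0.0  # no usage threshold fits within the fund

plan = [
    ("Title A", 31, 95.0),
    ("Title B", 25, 120.0),
    ("Title C", 9, 80.0),
    ("Title D", 7, 60.0),
]

threshold, spend = affordable_threshold(plan, fund=300.0)
print(f"Retain every title used {threshold}+ times (total spend {spend:.2f})")
```

Because the size of the fund and the prices of the most used titles differ from plan to plan, the threshold that emerges can vary widely, which is consistent with the variation LLR observed across its five plans.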

These experiences were the impetus for thinking about what sort of metrics and data would be helpful when trying to make decisions about how well resources are being used and how satisfied our users might be. So:

  • would it be helpful to understand more about how long a customer used an item for, rather than how many times an item had been viewed?
  • are there some types of measurable usage which would be meaningful and provide us with better evidence and data on which to base our decision making and service delivery?

It was at this point that Alexander Street approached LLR with an offer to co-operate on developing a suite of user engagement analytics.

Vendor perspective

Like NTU, many libraries are eager to move beyond cost-per-use analysis to understand how a resource was used and to what end. Answering these questions requires vendors to be more transparent about usage of electronic resources. Project COUNTER is essential in providing standard definitions of use and points of comparison between vendors. It shows libraries the frequency of use across collections and provides standard rules that Alexander Street and other vendors must follow. It provides consistency in reporting, but it does not reveal which individual titles or subjects are used or for what purpose. To support demand-driven models, vendors have been supplying item-level usage to show which titles, publishers and subjects are being used. But even those data do not reveal what the user does with the content beyond a view. For example, was the search a new discovery or a dead end? Was the video shown in a lecture hall of more than 100 students or viewed via mobile during a commute? Was the title previewed or viewed in its entirety? In 2017 Alexander Street made these and other data available via a User Engagement portal (see Figure 1), and piloted the data with libraries like NTU to determine which data were most valuable.

Figure 1 

Alexander Street summary dashboard portal screen showing user engagement

Alexander Street’s portal provides over 50 categories of data on use. As of 2017 it covers streaming video, and will expand to report on streaming audio and text-based formats in the future. The portal shares:

  • Playbacks: number of times a video was played by video title, subject, collection, publisher, including average duration a video is played.
  • Preview playbacks: all videos are discoverable via search engines, with a 30-second preview available if the library does not have access. Campus use of these previews by title, subject, collection and publisher is shared with the library to assist in collection development.
  • Patron Reports: information about the user population, including mobile access, operating systems, busiest times of day, referring URLs, etc.
  • Accessibility: the platform provides synchronous scrolling transcripts, subtitles and on-screen transcription for most video titles; this report shows how many times these features were used.
  • Engagement: measures any additional interaction with a title other than just a playback. For example, Alexander Street tracks if the title was annotated, saved to a playlist, embedded, shared, cited, etc. It also captures curated views, which are views from a link that was shared by others via an LMS embed, social media post, or other citation (see Figure 2).
  • Impact: this report captures a deeper story about use. The portal collects and reports how the video was used (assignment, shown in class, for entertainment, for research, etc.) and what rating the user gave the video. In the future, it will cover metrics demonstrating learning outcomes. Alexander Street is partnering now with faculty and libraries to measure and report on video interaction and quizzing. Aggregate results from these interactions would be shared with the Library, and individual student responses shared with faculty via the LMS (see Figure 3).

Figure 2 

Alexander Street title ‘Engagement’ screen

Figure 3 

Alexander Street ‘Impact Reports’ screen

Vendor challenges and opportunities

With no agreed definitions or standards in the area of engagement analytics, each vendor must make individual decisions on how to calculate these data. If publishers are transparent about these decisions, librarians can draw informed conclusions. Yet understanding the differences is time-consuming for the library. Vendors must balance the need to share new data with the need to provide a consistent and helpful customer experience. For example:

  • Alexander Street reports whether use was on or off campus. But the pilot revealed that the assumptions used for on or off campus did not work for all campuses. NTU, for example, has on-campus users with a separate IP range specified by the companies that manage the Halls of Residence, and this use was reported as ‘off campus’. Alexander Street has since revised its approach to report actual location and allow the library to interpret whether use was on or off campus.
  • Alexander Street’s operas and other performances are indexed at the movement or scene level. When a user played back three movements, Alexander Street had to decide whether that counted as one playback of the opera or three. The decision was taken to count only the parent-level playback, so three movements equal one playback of the opera, although this decision could vary between publishers (a minimal illustration of this roll-up follows this list).
  • Analysing user engagement requires that the product platform provides ways to engage the user. Variations in these features from platform to platform limit what is knowable or comparable about engagement. For example, on Alexander Street’s platform users can isolate sections of a video and make a clip, which can then be shared with other users. They track how many times clips are both created and viewed. This extends to playlists. Across content types and vendors, these experiences will differ widely.
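To make the parent-level counting rule in the second example concrete, the following minimal Python sketch rolls movement-level playback events up to one playback per parent work per viewing session. The event structure and the notion of a session are assumptions made for illustration, not a description of Alexander Street's internal processing.

```python
# Minimal sketch of the parent-level counting rule: several movement-level
# playbacks within one viewing session count as a single playback of the
# parent work. Event fields and the session concept are assumptions.

from collections import namedtuple

Event = namedtuple("Event", ["session_id", "parent_title", "movement"])

events = [
    Event("s1", "La Traviata", "Act 1"),
    Event("s1", "La Traviata", "Act 2"),
    Event("s1", "La Traviata", "Act 3"),   # three movements, one session
    Event("s2", "La Traviata", "Act 1"),   # a separate session
]

# Count one playback per (session, parent work) pair.
parent_playbacks = len({(e.session_id, e.parent_title) for e in events})
print(parent_playbacks)  # 2: one per session, not one per movement
```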

There are many other examples that point to the need for evolving standards. Those libraries and vendors who can take the time to pilot new approaches will help shape the metrics of the future. Already we can see initiatives in higher education, for example IMS’s Caliper Learning Analytics Framework, beginning to define how to measure user engagement and learning impact.

With the challenges, there are also opportunities. The area of the Alexander Street Portal showing referring data is perhaps most revealing about the material that is impacting the user community. Alexander Street reports the different sources of traffic to its material and shares a deep dive in a special category called ‘Curated Views’. Curated Views are a count by title of views received via user promotion: a Tweet, Facebook post, LibGuide embed, LMS embed, Wikipedia citation, etc. It indicates an investment on the part of the person posting the link or embed, showing a deeper level of engagement. It counts the users returning to the site, and it measures how successful these posts are at generating views. It demonstrates that the user community found value in the video and participated in its discovery. Sharing these data opens up avenues to learn more by comparing use from library referrals vs. organic search. Do users coming from the Library have a richer experience – spend longer on each video, engage more with each title? By connecting engagement metrics with referring data, libraries can see if efforts in discovery are leading to a deeper level of use.
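As an illustration of how such referrer-based categories might be derived, the Python sketch below buckets referring URLs into ‘curated’, ‘discovery’ and ‘search’ traffic. The domain patterns, category names and example URLs are assumptions made for the purpose of the example; they are not Alexander Street's actual rules.

```python
# Illustrative sketch only: one way referring URLs might be bucketed so
# that 'curated' traffic (LMS embeds, social media posts, LibGuides, etc.)
# can be separated from discovery-system and search-engine traffic.
# All host patterns and categories below are invented for illustration.

from urllib.parse import urlparse

CURATED_HOSTS = ("facebook.com", "twitter.com", "libguides.com",
                 "wikipedia.org", "instructure.com")   # social, guides, LMS
DISCOVERY_HOSTS = ("primo.exlibrisgroup.com",)          # library discovery
SEARCH_HOSTS = ("google.", "bing.")                     # organic search

def classify(referrer):
    """Assign a referring URL to a traffic category."""
    host = urlparse(referrer).netloc.lower()
    if any(h in host for h in CURATED_HOSTS):
        return "curated"
    if any(h in host for h in DISCOVERY_HOSTS):
        return "discovery"
    if any(h in host for h in SEARCH_HOSTS):
        return "search"
    return "other"

views = ["https://ntu.primo.exlibrisgroup.com/discovery/search?q=opera",
         "https://www.facebook.com/groups/filmstudies/",
         "https://www.google.co.uk/"]
for url in views:
    print(classify(url), url)
```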

Other opportunities lie in exposing these data to the end-user community directly. How might engagement increase by revealing the engagement of others? Vendors might share lists of the highest-rated videos or the titles with the most overall engagement. Students could browse videos by those most studied on campus or most shown in classes worldwide. A heat map on each video timeline could expose which sections of a video are most popular or most annotated for educational use.

The most enduring possibilities will result from true collaborations between publishers, libraries and faculty. Publishers know what can be reported and librarians and faculty know what should be considered in assessing value. Defining ‘good use’ is the true opportunity.

Initial findings

LLR now has access to over six months’ worth of engagement analytics, and has found them both fascinating and useful in the decision making processes that arise at the end of an EBA plan.

Unsurprisingly, the data on offer from the user engagement analytics are much richer, in both quality and range, than those found in the COUNTER usage reports. As previously mentioned, LLR has made widespread use of cost-per-use metrics as core data to support collection decision making, but experience has shown that this metric does not work well with EBA packages of books or video content. For EBA plans, libraries need to make decisions at the title or item level, and are seeking evidence and confidence that they are making the correct decisions about which content to retain and which to discard.

LLR, like most libraries running EBA plans, naturally leans towards acquiring content which has been most used, but was keen to establish if it was possible to see other data and metrics which might be better indicators of how users had engaged with the content.

Having just concluded an EBA plan with Alexander Street, LLR now has experience of using these engagement metrics in practice, and some of its decisions have definitely been influenced as a result. The majority of the content chosen for retention at the end of the plan was still the content that had received the highest use, but there were instances of content being selected where a greater percentage had been watched, in preference to other resources which had received more views. Figure 4 contains conventional usage data showing the most used items in descending order, but it also shows on average how much of the content was played. It is interesting to see here that the item with the largest average percentage played is not the most viewed title.

Figure 4 

Alexander Street ‘Usage by Titles’ screen

These analytics data proved to be very helpful when fine-tuning our final selections. Experience has shown that it is rare that the prepaid EBA funds run out neatly at a point where all content with the same amount of usage can be selected for retention. This means that libraries will usually be required to choose between different titles which have all received the same amount of usage in COUNTER metric terms.

At the end of the Alexander Street EBA plan, LLR fortunately had additional metrics on which to draw, which helped with these difficult decisions. In particular, there were some resources which had been viewed the same number of times, but the average percentage played differed from title to title, and this proved to be the deciding factor. Other titles showed evidence of being embedded or curated in Reading List software or on the VLE; other content had been cited, or the customer had chosen to watch the video with the accompanying transcript. In each case, this evidence was pivotal in the decision to include these resources in the final selection.
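A minimal Python sketch of this tie-breaking logic, using invented titles, prices and figures, is shown below: titles are ranked first by playbacks and then, where playbacks are equal, by average percentage played, and are selected until the prepaid fund is exhausted.

```python
# Hypothetical sketch of the tie-breaking described above: equal playback
# counts are broken by average percentage played. All records are invented.

titles = [
    {"title": "Video A", "playbacks": 12, "avg_pct_played": 35, "price": 150.0},
    {"title": "Video B", "playbacks": 12, "avg_pct_played": 80, "price": 150.0},
    {"title": "Video C", "playbacks": 20, "avg_pct_played": 60, "price": 200.0},
]

# Rank by playbacks, then by average percentage played within ties.
ranked = sorted(titles,
                key=lambda t: (t["playbacks"], t["avg_pct_played"]),
                reverse=True)

fund = 350.0
selection = []
for t in ranked:
    if t["price"] <= fund:
        selection.append(t["title"])
        fund -= t["price"]

print(selection)  # ['Video C', 'Video B']: B beats A on percentage played
```

In practice, of course, other engagement signals such as embeds in Reading List software or the VLE, citations, or use of the accompanying transcript were also weighed in these decisions, as described above.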

Insights into user behaviours

Overall, LLR’s experience of using the engagement analytics reporting has confirmed its belief that this reporting adds real value, helping to inform decision making and giving greater insight into user behaviours.

The act of citing an item suggests both importance and value to the user. Someone liking content so much that they shared it, added it to a playlist, watched it on their mobile, clipped sections of it, added it to a resource list, embedded it in learning content, or e-mailed, saved or printed it: all of these actions suggest some level of engagement with, and interest in, the content that has been made available. LLR feels that all of these types of metrics have the potential to give useful insight and understanding beyond influencing decisions affecting acquisitions.

For instance, it is almost received wisdom among librarians that most library users start their searching with Google, and there have been questions about whether it is worth loading MARC records, or whether libraries should just switch content on and off in their link resolver. It was only a few years ago that a session at a UKSG conference asked whether libraries still need an OPAC and catalogue, or whether they should simply rely on Google. The Alexander Street analytics on top referring URL data (Figure 5) clearly show that the NTU discovery system, Ex Libris Primo, is the primary route to accessing Alexander Street content, and LLR feels these data firmly back up the practice of loading MARC records as key to driving both discovery and usage.

Figure 5 

Alexander Street analytics on top referring URLs (October 2017)

In this same set of results (Figure 5), we can also see that some referrals came from NTU’s Resource List system, which is important information to consider during end-of-plan acquisition decision making, to ensure that these titles are retained. These referrals also suggest that our academics value the content and wish to use it in their teaching.

The Curated Views analytics show content that LLR’s users have found in the VLE, via social media or embeds. These curated data give insight into behaviour that promotes awareness of Alexander Street content, and are calculated based on referrer data that show what kind of link a user followed and from where. They give evidence of the effort that has gone into drawing users’ attention to Alexander Street content, and this is strongly suggestive of the value placed on the content.

Value as a marketing tool

Marketing and promotion of resources are becoming more important to libraries, and the granularity of the Alexander Street data gives libraries the chance to try out different marketing tools and routes, and to watch the user engagement analytics for evidence about which communication routes are most monitored or most effective. This is an area of work that LLR wishes to progress in the coming months, once we have identified robust sampling techniques. Gaining some insight into whether Facebook promotion appears to be more effective than Twitter, or whether word-of-mouth promotion by NTU’s Library liaison teams is more effective still, would be very useful in informing future marketing campaigns and priorities.

Where next?

The additional insights gained from the user engagement analytics have already proved valuable, and LLR would like to make greater use of the data in future.

LLR does, however, recognize that whilst the temptation to try to find out more about what our customers like and value is almost overwhelming, there could be a tension between this desire and providing seamless, non-intrusive access to content.

The interactive nature of Alexander Street’s platform, where users can rate videos or provide information about how content has been used, also raises questions about whether libraries can trust users to be honest in their responses. For example, if an opera has been watched at an institution which has no music or creative arts courses, might this use have been for recreational or entertainment purposes, and would the user admit this, or would they select a more academic reason?

LLR also believes that more work is needed on understanding how to interpret ‘low percentage played’ data. Libraries do not expect users to start reading every e-book at page one and continue to the end, so what would be the engagement markers of good targeted usage of resources, as opposed to random dipping in and out?

LLR would also like to explore the idea of engagement analytics for text-based materials, which make up the majority of its collection content. It is understood that not all e-book and journal platforms have the same level of interactivity as video-based platforms, but LLR would like to see more traditional forms of publishing starting to include data beyond simple usage.

The fact that most e-book and journal platforms can detect how much content has been printed or copied suggests that reporting on how much content has been used should be possible. If text-based publishers could also report on the top referring URL data, it would help to verify LLR’s tentative conclusions about the importance of MARC records in the discovery and subsequent use of text-based content.

Conclusions

The additional insight from user engagement analytics has already proved invaluable in EBA decision making, and LLR would like to make greater use of this in future, across a wider range of content types.

User engagement analytics need to be seen as an addition to COUNTER usage reports and not as a replacement. For such analytics to be truly useful and respected, more work is also needed on developing standards for this type of reporting, so that consistency across vendors can be achieved.

Finally, it can be argued that more information and data may not always be a good thing, and libraries could run the risk of information overload hampering the very decision making processes that they are trying to improve. Finding the happy medium between having enough data and being swamped will be something that can only be reached through experience, but the prospect of an improved understanding of how library users are engaging with resources is surely a goal worth pursuing.