In September 2010 the University of Liverpool’s Library Service implemented its version of EBSCO’s resource discovery platform (EDS), branded locally as Discover, alongside the Library catalogue. After its introduction the statistics showed year-on-year increases in ‘use’. In the 2013–14 academic year alone, both the average number of sessions and the number of full-text downloads per full-time equivalent (FTE) had increased by 16% on the previous 12 months, whilst the number of searches had increased by 29%. However, although inferences can be made, the figures alone can only tell us so much. What they did not tell us was exactly how or to what extent our users were engaging with Discover. Nor did they tell us how easy they actually found the platform to use, how efficient and effective they were in locating and accessing the content sought or what features and functionality they found particularly useful. Were the figures growing because Discover was meeting their needs and increasingly becoming a ‘go-to’ resource in preference to the other options available? Or was the opposite in fact true? Did they find it difficult to use, or were they not utilising it to its full potential, having to perform multiple searches to locate the resources required?

Indeed, according to Betz, one of the biggest problems facing libraries is ‘discovery and access’ and how large our collections are ‘is really irrelevant if your students and faculty can’t find it, or … find it impossible to use’. Thus, in early 2015, a cross-library project group was assembled to embark upon a large-scale usability study. The purpose of the study was to gain a better understanding of user engagement. The aim was to find out exactly what users liked and disliked about the platform and to assess the extent to which it was meeting their information needs. In doing so, we would be able to make informed, evidence-based changes to the interface, improving its overall usability and making it a more intuitive, effective and efficient resource. This in turn would ultimately result in an improved Library service and enhanced student experience in line with a number of objectives identified in the Library’s Strategic Plan.

Methodology

Building upon similar, earlier studies undertaken at the University of Huddersfield in 2010 and Manchester Metropolitan University in 2014, the study would employ a three-part approach consisting of an initial survey followed by usability test sessions and focus group discussions.

Survey

The survey itself was created using QuickTap software and the responses collected using iPads by members of staff roaming the social areas of the two main Library sites. Concerned that this alone would not capture a representative sample of those patrons who rarely used the physical library, we also created a separate but identical online version of the survey. Wary of survey fatigue, given the ongoing National Student Survey, we were reluctant to distribute this wholesale via staff and student mailing lists. Instead, the online version underwent a soft launch, advertised and promoted purely through the Library’s news blog and its popular social media accounts.

The survey questions themselves were concerned with:

  • general information-seeking behaviour
  • existing use of Discover
  • evaluation of Discover
  • use of refiners/limiters
  • future use of Discover
  • free text comments.

In total, 705 responses were collected (671 on site and 34 online), equating to roughly one completed survey for every three minutes we were actively collecting responses. The sample population itself appeared to be a relatively healthy representation of the University’s wider student demographic. An Excel-based dashboard was created to interrogate the survey responses, enabling the data, or any particular dimension of it, to be broken down or limited to respondents of a particular subject area, student type or year of study, or to any other section of the sample population as defined by their responses to one or more questions.
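
To illustrate the kind of interrogation the dashboard supported, the sketch below shows how survey responses might be sliced by respondent attributes. It is purely illustrative: the project used an Excel-based dashboard rather than code, and the file name and column names (`subject_area`, `student_type`, `year_of_study`, `ease_of_use`) are hypothetical rather than the actual survey schema.

```python
import pandas as pd

# Illustrative sketch only: the project's dashboard was built in Excel.
# The file name and column names below are hypothetical, not the actual survey schema.
responses = pd.read_csv("discover_survey_responses.csv")

def slice_responses(df, **criteria):
    """Limit responses to those matching the given attribute values,
    e.g. slice_responses(df, student_type="undergraduate", year_of_study=2)."""
    for column, value in criteria.items():
        df = df[df[column] == value]
    return df

# Example: how second-year undergraduates rated ease of use,
# expressed as a percentage of that subset of respondents.
subset = slice_responses(responses, student_type="undergraduate", year_of_study=2)
ease_breakdown = subset["ease_of_use"].value_counts(normalize=True) * 100
print(ease_breakdown.round(1))
```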

Usability test sessions

The usability test sessions consisted of three semi-structured search tasks involving both topic-based and known-item searching of journal titles, journal articles and both print and electronic book titles. Each task would last approximately ten minutes with the first two tasks using the existing, live version of Discover. For the third and final task, the participants were asked to repeat aspects of the first two tasks using an alternate, test version of the interface. Five test sessions were scheduled involving 20 participants in total.

Whilst performing the tasks, each participant was asked to ‘think aloud’, providing a running commentary on what they were doing or trying to do: what they were looking at or for, why they were using a particular feature or function, what they found confusing or frustrating, what was not behaving in the way they expected, and so on. Headsets with microphones were provided and recording software used to capture both the on-screen activity and the accompanying audio narrative. The participants were asked to carry out each task in a manner as naturalistic and representative of their usual information-seeking behaviour as possible. To help promote and preserve this realism, beyond framing the sessions and being on hand in case any problems arose, the facilitators did not observe or interact with the participants as they went about each task. Instead, members of the project team would later analyse the recordings using an observation check-list. This allowed us to record systematically the occurrence of particular, pre-defined search behaviours and techniques, and the use of specific facets and functionality, along with anything else deemed of interest during each task.
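
As a rough sketch of how such check-list observations can be aggregated, the example below counts how many participants exhibited each pre-defined behaviour at least once. It is illustrative only: the behaviour codes and data structure are hypothetical stand-ins for the items on the project’s check-list, which was completed manually rather than in code.

```python
from collections import Counter

# Hypothetical behaviour codes standing in for the pre-defined items
# on the observation check-list.
CHECKLIST = {
    "advanced_search", "boolean_operators", "phrase_searching",
    "source_type_facet", "detailed_record", "preview_option",
    "beyond_first_results_page",
}

def tally_observations(sessions):
    """Count participants exhibiting each check-list behaviour at least once.

    `sessions` maps a participant ID to the set of behaviour codes noted
    while reviewing that participant's screen and audio recording.
    """
    counts = Counter()
    for observed in sessions.values():
        counts.update(code for code in observed if code in CHECKLIST)
    return counts

# Worked example with three (fictional) participants.
example_sessions = {
    "P01": {"source_type_facet", "detailed_record"},
    "P02": {"detailed_record", "phrase_searching"},
    "P03": {"detailed_record"},
}
for behaviour, n in tally_observations(example_sessions).most_common():
    print(f"{behaviour}: {n} of {len(example_sessions)} participants")
```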

Focus group discussions

A focus group discussion immediately followed each test session. Here, participants were asked to reflect on their typical information-seeking behaviour and their experiences of using Discover both during the test sessions and in a wider, everyday context. Again, the audio from these discussions was recorded, transcribed and analysed.

Findings

What then did we actually discover?

When asked where they typically sought information for their assignments, Google, unsurprisingly, was the most popular choice, with over 70% of respondents advising they used it as part of their research strategy. In contrast, just 44% of respondents cited Discover as their go-to resource. However, in spite of this, 85% of the survey population advised that they did use Discover to some degree. What was also particularly encouraging here was that, of the respondents who advised that they never used Discover, very few (just 2% of the total survey population) appeared to have made an informed decision not to do so, with the remaining 13% advising that they either did not know what Discover was, or did not know where to access it or how to use it.

When asked their reasons for using Discover, the most popular was to research a topic or subject. When it came to the use of Discover itself, most of the survey data reflected positive user engagement, with most respondents agreeing that it was easy to use (88%) and that they were able to find the information sought (82%). The use of refining and limiting facets was also near-universal, with 96% advising that they refined their searches to some degree. Last but not least, just 7% of those surveyed advised that they would be unlikely to use Discover in the future, so on the back of the survey alone it seemed that we were already making progress!

The positivity continued into the free text comments, with just over half of the 203 received being affirmative in nature. The majority related to the platform in general rather than any specific aspect of it. What was also interesting was that of the total number of negative comments received, some 25% referenced a lack of awareness rather than any particular grievance with the platform itself, reflecting the relatively high percentage of ‘uninformed’ non-users highlighted earlier.

In the test sessions themselves, very few participants were observed using the advanced search option, which was somewhat surprising given its popularity amongst the survey respondents. There was also relatively little use made of more sophisticated search techniques such as phrase-based searching or the use of Boolean operators. Rather, for the most part the participants simply appeared to be transferring the same search behaviour they would use with Google and other open web search engines.

Although the use of limiting facets was widespread (‘Source type’ being the most popular), this was not to quite the same extent as expected given the survey responses, with almost a third of test participants appearing to rely exclusively on keyword modifications or reverting to full-sentence-based searching instead.

Although there was evidence of some participants scrolling down the initial results page to varying degrees, there was just a single occurrence recorded of a participant going beyond the first page of results.

‘Did you mean?’, ‘Autocomplete’ and ‘Research Starter’, features designed to mirror valued aspects of other open web resources, all elicited positive responses. The latter in particular was quickly recognized as a more reliable, quality-assured and guilt-free alternative to Wikipedia.

The use of the detailed record emerged unexpectedly as a key feature for most of the participants. Here it was systematically being used as the preferred means of evaluating and accessing found content, even though the same information and options were also available from the initial results page. In comparison, just a single participant was observed making use of the preview option. This extensive use of the detailed record was particularly surprising given that doing so required an additional ‘click’, and runs somewhat contrary, perhaps, to the ‘concept of “satisficing”’, where a user simply ‘wants to do “the minimum requirements necessary to achieve a particular goal”’.

There was little evidence that any of the additional features, tools or customization options available within Discover were well used or even fully understood. There were also many examples in the test sessions of incomplete awareness of, and confusion over, the scope, purpose and functionality of not just Discover but also other components of the Library’s resource discovery options. For some participants it was the actual branding of Discover itself that appeared to have passed them by, despite its prominence on the Library website. This directly challenged an assumption we had taken into the project: that most students knew exactly what Discover was. In fact, they appeared to have been influenced more by their tutors’ recommendations of where to look than by our positioning of Discover front and centre on the Library homepage.

Pain points and barriers

The test sessions also revealed a number of usability issues, some of which were specific, isolated incidents whereas others were more common. The advanced search option was deemed to be too complicated by nearly all of the participants who sought to use it during the sessions (and a number of comments from the survey also cited this). ‘Research Starter’ was sometimes slow to load, often appearing several seconds after the initial results set, which caused some participants to miss it altogether.

However, by far the most problematic area centred on the actual accessing of online content, with the majority of participants experiencing varying degrees of difficulty when trying to access the full text of found resources. In some cases this was simply due to inconsistencies in the positioning of the links to the various full-text options presented, particularly within the detailed record. In others, it was the multiplicity of links presented which appeared to cause confusion, especially in the absence of a direct PDF or HTML full-text ‘smart’ link. (See Figure 1.)

Figure 1 

Example of the multiple full-text linking options often presented

Likewise, many participants appeared to suffer similar problems when navigating the interim link resolver pages and/or the subsequent native landing sites. The appearance of item-level status boxes for online resources also appeared to add to the confusion, with almost a third of test participants erroneously attempting at some point to use either the ‘Class No’ or ‘Location’ field links to access the full text.

Alternative interface

In the final task of the test sessions the participants were asked to use an alternative version of the Discover interface. The intention here was to strip down, simplify and declutter. In particular, we wanted to provide a much simpler way of accessing the full text. The idea was to provide a single, standardized ‘Get Full-Text’ button which would appear in place of the array of smart and custom links and icons featured in the existing version. The results set was also decluttered in other ways, such as by hiding the item-level status boxes for electronic resources. (See Figure 2.)

Figure 2 

Example of the single, standardized ‘Get Full-Text’ button presented on the alternative, test version

Feedback was predominantly positive, with the vast majority declaring the new look a definite improvement. Likewise, given the evident barriers and pain points caused by the multiplicity of linking options, the single ‘Get Full-Text’ option was also extremely well received. However, there were some reservations, chiefly that it removed the option to choose which database or platform to access the content on.

Lessons learnt

Primarily, we wanted to incorporate into a fresh interface all that we had learned about users’ behaviour, including their preferences and pain points, along with the feedback received on the alternative version. To that end the following actions were recommended:

  • third column on the results page to be collapsed by default, with QR codes (two-dimensional bar-codes), NewsWires and ‘More Books’ options sited here to be removed
  • first line of abstracts to be removed from the results page
  • the number of links in the top header bar to be reduced
  • banner to appear at the top of the results set to highlight the use of limiters if too many/too few results returned
  • plainer, less technical language to be used for refining facets
  • QR codes also to be removed from the detailed record
  • item-level status boxes of electronic resources (pulled through from the Library catalogue) to be hidden
  • only two full-text linking options to be presented – a single, standardized ‘Get Full-Text’ button alongside a rebranded link resolver option to allow choice of content provider.

Figures 3 and 4 illustrate both the pre- and post-project versions of the interface.

Figure 3 

Discover pre-project interface

Figure 4 

Discover post-project interface

Throughout the study other lessons were also learnt about the process itself, many of which have since been taken forward and applied to other user experience exercises within the Library.

It was felt that the informal approach taken to the collection of survey responses helped facilitate user engagement and thus boost the response rate. Here, following the initial approach, those who agreed to take part in the survey were simply handed the iPad and asked to complete the survey themselves with the member of staff remaining on hand in case of any problems.

Post-survey briefing sessions were scheduled with the staff members involved. These were seen not only as an opportunity to feed the results of the survey back but also as a forum for identifying problems encountered and determining future best practice. For example, targeting small groups of students proved to be a particularly effective strategy – if one agreed to take part, there was a good chance they all would.

Recruiting participants for the test and focus group sessions, however, was (and has continued to be) a struggle, despite various approaches and the now ubiquitous carrot of financial reward. Here, requests for expressions of interest were built into the survey, the Library’s social media accounts were mobilized and approaches were made to the (newly formed) Student Library Partnership group. In desperation, we resorted to directly approaching users on site – all rather ad hoc and hardly the ideal of a stratified random sample, but beggars cannot be choosers.

Research postgraduate students with prior experience of leading focus groups were used to facilitate the group discussions. They were practised and impartial, and it was hoped the participants would better relate to and feel more comfortable with them than they might with staff members, and would therefore be more forthcoming and candid throughout the discussions.

The project also afforded an opportunity to fully explore and experiment with the various ‘out of the box’ customization options available for the EDS interface, which allowed us to gain a good awareness of exactly what we could and could not do. However, some of the changes we were hoping to implement went beyond these. Fortunately, members of the Library’s Systems Team were on hand with a high level of technical expertise to liaise with EBSCO support and clearly articulate what it was that we wanted to achieve. EBSCO was willing to work with us to help develop the enhancements that would address some of the issues raised.

Some of the features and functionality incorporated over time into the pre-existing version of the Discover interface were added for reasons that were entirely justified at that time. However, the needs of our users had since evolved and the 2015 review led to a reconfiguration of some features and the removal of others. Without doubt, those needs in another three years’ time will have nuances that will set them apart again, demonstrating that these reviews need to be iterative. It is important, then, to work towards a fairly regular process of review, continually seeking feedback so that we can be responsive to, perhaps even pre-emptive of, those needs.

The full findings of the project can be found in the Discover: Survey, Usability Testing and Focus Group Report.