Introduction

Traditionally, library collection management and development focused on obtaining the scholarly literature required for research and instruction. It did not need to concern itself with where researchers and scholarly authors within the institution published, or with capturing the scholarly outputs of the institution beyond theses and dissertations. This situation continued in the early days of open access publications and the development of institutional repositories. During this time, the two teams responsible for collection development/subscription management and open access could co-exist in isolation, using different tools to build their own separate collections. In recent years, academic institutions and libraries have become keener to find ways to record and retain their scholarly outputs, both as a measure of overall research capacity and scholarship and as a measure of scholarly engagement.

In 2008, TERMS was launched as a framework to help library workers become more familiar with the life cycle of electronic resource management. The initial vision expanded upon Pesch’s electronic resources cycle and focused on the day-to-day activities of electronic resource management. It consisted of six constituent parts, each further subdivided into sections (see Figure 1).

Figure 1 

TERMS Version 1 (originally titled ‘The six TERMs’)

Version 2.0 of TERMS was published as an open access book in 2019. It featured revised sections and widened its reach beyond journals and databases to include more content types, such as multimedia, e-books and open access. TERMS 2.0 was designed to work on the Pareto Principle, where 80 per cent of the work is invested in 20 per cent of the content managed. Each subsection was further divided into three parts: basic, complex and open access, although the three parts may be interrelated or share the same principles of electronic resource management (see Figure 2).

Figure 2 

Subdivision of TERMS 2 Sections. Published under the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license

Published just a year after the launch of Plan S, TERMS 2.0 was only able to predict the effect Plan S might have on e-resource management. In this article we take this a step further, using the modular framework of TERMS to show how it can be adapted so that library e-resources/subscription management departments can address the principles of Plan S and the transition to open scholarship, and embed them in the management of the e-resources workflow. Of particular note is the structure of many library teams. Although some libraries or institutions have a single member of staff or a single team managing the entire workflow, many organizations have separate teams to select and acquire resources, to implement them and to analyse their use. Furthermore, there may be a completely separate open access or repository team, which may not be located in the main library setting.

Our hope with TERMS, and this article in particular, is to combine the knowledge of the two teams to help both groups understand the full range of processes and gain a greater appreciation of the work involved in each area. The intent of this article is to highlight the open access management discussion sections of the book and to expand upon areas of growth and development of these processes in libraries. This will assist in developing an overall comprehension of where data intersection happens and where problems are likely to arise, and will suggest future explorations to be made. This article differs from the book in that it explores more directly the relationship between the Plan S framework and the framework of TERMS.

Much has been written about Plan S. Its launch in September 2018 brought about a paradigm shift in the negotiation of large-scale journal ‘big deals’. The ten principles of Plan S are listed in Table 1.

Table 1

The ten principles of Plan S

Plan S principles

1. Authors or their institutions retain copyright to their publications
2. The funders will develop robust criteria and requirements for the services that high-quality open access journals must provide
3. Where high-quality open access journals or platforms do not yet exist, funders will provide incentives to establish and support them when appropriate, in a coordinated way
4. Fees are covered by the funders or research institutions, not by individual researchers
5. Diversity of business models for open access journals and platforms
6. All stakeholders are encouraged to align their strategies, policies and practices, notably to ensure transparency
7. The above applies to all scholarly publications, although monographs will take longer
8. Hybrid publishing is not supported, unless it is part of an approved transformative agreement
9. Funders will monitor compliance
10. Assessment measures that value the merit of research outside of journal metrics

Although not adopted by all funders, Plan S is a game changer. Consortia in cOAlition S funder countries and beyond are now replacing the big deal with transformative ‘read and publish’ agreements, where institutions are provided with access to all titles and their authors may publish open access at no additional cost (within predefined limits) or at reduced charges. In addition, alternative models are emerging that allow subscription content to open up without an article processing charge (APC). These include, but are not limited to, PLOS’s Community Action Publishing, Annual Reviews’ Subscribe to Open and, most recently, the Knowledge Unlatched S2O schemes with Berghahn Press, Pluto Press and the International Water Association journals.

Transformative read and publish agreements and open content deals require library subscription and open access teams to work together to share data on the read (subscription costs) and publish (financial information on institutional outputs) elements of the agreements.

Furthermore, during the first days of the Covid-19 pandemic, librarians and scholarly publishers quickly realized that, in a remote learning environment, being able to provide (open) content was key for the educational transition to happen. Many paywalls were temporarily dropped to allow greater access to all types of scholarly content. After more than six months in this environment, the use of open scholarly content has surged. A secondary effect of the global pandemic has been significant losses of funding and revenue at many institutions, alongside the detrimental financial impact on global economies. These two conditions have resulted in many higher education librarians needing to find new pathways to finance and support the research and scholarly endeavors at their respective universities.

TERMS 1. Investigating new content

Read and publish agreements are extremely complex to investigate. Library teams need to understand not only whether the content fits the scope of the local collection management and development (CMD) policy, but also whether their researchers actively publish in these titles. In some cases it must also be determined whether researchers serve on the editorial boards of the journals under investigation; most institutions do want to support and maintain journals where there is a strong editorial board presence. To determine the total cost of an agreement, subscription levels as well as all institutional APC costs should be gathered to understand the potential local value of the agreement. This involves administering, or at least understanding the costs of, internal APC budgets (e.g. dedicated library, research department or faculty budgets) as well as external funding agency budgets, such as block grants. ‘Subscribe to open’ agreements are more straightforward to review but will still require library teams to understand their local scholarly outputs to ensure the most appropriate content is being supported through a subscribe to open deal.

Part of the transparency Plan S aims to achieve includes transparency within a library’s processes and planning documentation. Traditionally, when selecting content for purchase, the CMD policy (or similar) acts as a guide. However, for many libraries, these plans have not been updated or revised to incorporate local scholarly output or open access in the context of collection development.

When trying to calculate APC spend from an institution, libraries need to be aware of ‘APCs in the wild’. These APCs, paid from resources outside of the library and without its knowledge, can be difficult to trace. Some are traceable in conjunction with the Finance Department through a deep dive of the finance system, looking for the institutional cost code used for APC payments. In other cases, however, the authors themselves may have paid the charges directly, the APC may have been attributed to the institution in error or the author may have since left. This can often account for discrepancies between what the library and the publisher believe has been published by the institution.
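As a sketch of the kind of reconciliation this involves, assume a finance-system export and a publisher APC report are available as CSV files; the file names, column headings and cost code below are all hypothetical:

    import pandas as pd

    # Hypothetical exports: column names and the APC cost code will differ by institution.
    finance = pd.read_csv("finance_export.csv")          # e.g. cost_code, supplier, amount, doi
    publisher = pd.read_csv("publisher_apc_report.csv")  # e.g. doi, article_title, apc_charged

    # Payments booked against the institutional APC cost code.
    apc_payments = finance[finance["cost_code"] == "APC-001"]

    # Compare by DOI to surface 'APCs in the wild': articles the publisher attributes
    # to the institution with no matching payment in the finance system.
    merged = publisher.merge(apc_payments, on="doi", how="left", indicator=True)
    in_the_wild = merged[merged["_merge"] == "left_only"]

    print(f"{len(in_the_wild)} publisher-reported APCs with no matching institutional payment")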

Pure open access content has been subject to very high price rises in recent years. The transparency achieved by the major consortia negotiations has led to more immediate pushback from the scholarly information community when pricing structures seem awry. This can be seen in the number of renegotiations of journal big deal packages in the United States in the past two years alone. For the purposes of investigating pure open access content, it is important to follow existing selection criteria (drawing on other existing selection guidelines where necessary to ensure that only quality content is acquired). In addition, these criteria should be supplemented with the following checks:

  • is the title listed in the Directory of Open Access Journals (DOAJ), which includes peer-reviewed pure gold journals, or in ISSN ROAD, the Directory of Open Access Scholarly Resources, which lists DOAJ titles with an ISSN? (A lookup sketch follows this list)
  • is the journal or publisher a member of OASPA and/or COPE? Membership is a further sign that quality criteria are fulfilled, as are clear explanations of copyright licensing and a transparent pricing structure
  • does the journal website clearly state what the APC costs are and how the money is used to achieve publication of articles?
  • does the journal have a readily available publication ethics statement?
  • does the journal website clearly indicate what software is used for plagiarism checking, and is the review process clearly delineated?
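For the first check, a minimal lookup sketch is shown below. The endpoint path, response structure and ISSN are assumptions and placeholders, to be verified against the current DOAJ API documentation (https://doaj.org/api):

    import requests

    # Placeholder ISSN; the endpoint path and response fields are assumptions
    # based on the DOAJ public search API (verify at https://doaj.org/api).
    issn = "1234-5678"
    resp = requests.get(f"https://doaj.org/api/search/journals/issn:{issn}")
    resp.raise_for_status()
    results = resp.json().get("results", [])

    if results:
        journal = results[0].get("bibjson", {})
        print("Listed in DOAJ:", journal.get("title"))
        print("Licences:", [lic.get("type") for lic in journal.get("license", [])])
    else:
        print("Not listed in DOAJ - apply the remaining checks with particular care")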

There has been a proliferation of diamond open access journals in recent times. Diamond open access journals require no APC payments and are readily available to be read at no cost. They are usually funded through membership schemes, grant funding, annual collective funding drives, donations and in-kind support. Most of these publications are volunteer run and managed. A 2021 research report commissioned by cOAlition S indicates there are approximately 29,000 diamond open access journals but only a third of these are registered in the DOAJ. The journals appear to be small in scope, come from numerous regions around the world and publish fewer articles than journals from legacy commercial scholarly publishers. They are predominantly focused on humanities and social sciences disciplines in stark contrast to science and medical areas of study.

Although these titles are free to publish in and free to read, costs are still incurred by the publisher. This gives rise to the so-called ‘free rider’ problem, where researchers publish with diamond open access publishers, but their institution offers no contribution via supporter memberships or subscriptions. A library may want to review faculty publishing to identify open access publishers with which its researchers regularly publish and to determine whether supporter memberships would be appropriate. However, it may not be clear how much financial support can be provided to some diamond open access journals, especially if they are directly affiliated with a specific university or consortial body.

Libraries such as Imperial College have already integrated open access into their CMD policies. However, some libraries, such as the University of Amsterdam, have gone further by including a diamond open access fund to support journals in which their researchers have published.

Whatever the colour of open access, there is no such thing as a no-cost resource. The open access diamond journals study recommends that cOAlition S, amongst other stakeholders (including libraries), ‘collaborate to develop national and international funding strategies for OA diamond publishing for the next five years. A strategy would specify what to fund in two areas: operations and development’.

Just as libraries support the infrastructure for traditional resources (MARC records, shelf-ready books or even shelving space and binding), there is a need to support the infrastructure behind open content. Lewis suggests a 2.5% commitment from library budgets, although Neylon argues that this figure is both too ambitious and not ambitious enough, encouraging libraries to start small. Some libraries see this commitment as supporting a transition to an open research culture.

In summary, when beginning to explore which open access opportunities to support, libraries should first reflect on their CMD policies to see what priorities have already been established, review faculty publications to identify providers, and consider diamond open access journals that can be recommended to faculty as alternatives to APC-funded publishing with legacy publishers.

TERMS 2. Purchasing and licensing

Individual members of cOAlition S will monitor compliance with, and the timeline for, the implementation of Plan S. Regional and national consortia will therefore often have their own requirements for transformative open access agreements. These will have been produced in part to make sure read and publish agreements are compliant with local funder policy.

TERMS 2.0 recommended a set of negotiation deal breakers for the purchasing and licensing of journal content, largely based on ensuring the successful implementation of a new resource into library discovery systems. In a Plan S world, a second set of deal breakers, such as funder compliance, is required. In this respect, both ESAC (Efficiency and Standards for Article Charges) and Jisc recommendations can be adapted for this purpose. This enables new agreements to be assessed against a checklist or framework to establish whether they are transformative, hybrid or pure gold, and whether this matters to an institution. Furthermore, Galvan takes the novel approach of aligning an institution’s licensing principles with its strategic plan, so that values are reflected in the strategic development of content. This echoes the Plan S principle of ensuring transparency through the alignment of local strategies, policies and practices. The ESAC initiative also maintains an essential registry of transformative agreements by consortia, which is a useful starting point to see what is available with regard to license terms and what may be on the horizon. The registry is also a useful tool when investigating potential deals, as it shows whether a provider/publisher has previously participated in a read and publish or subscribe to open deal, and it provides insights into the pricing used by other consortia and what may be possible with any given provider/publisher.

In order to transition successfully to open access, all institutions, including those with a teaching focus, need to participate. For these institutions, finding evaluation techniques beyond the measurement of cost per use is key, although Marques and Stone found that such institutions can also benefit in terms of cost per use. Furthermore, the Statewide California Electronic Library Consortium (SCELC) noted that a key participation point for a teaching institution may be the ability to deposit articles into local repositories.

TERMS 3. Implementation

Setting up open access content might appear to be straightforward but may actually require shifts in how resource implementation is undertaken. Where a read and publish agreement (or any agreement featuring a reduction in APCs) has been reached, the first focus of implementation is branding and marketing to the local community. It is extremely important to get the word out clearly and consistently that a new option for open access publishing is available and that costs may be covered. In addition, centering the library as the co-brand with the publisher/vendor helps to emphasize the key role the library plays within the institution, signifying that this open access opportunity has been vetted and reviewed by the local information professionals.

Regarding technical implementation, it is not enough to make open access content freely available and then assume that readers will find it. The metadata associated with open access content needs to be as extensive as possible in order to reach all of the access points that researchers might use. This links the work of the open access and e-resource management teams. Open access content should be enabled in the local catalog, discovery system and repository for locally authored works. Teams should also ensure it is available through browser extensions such as Open Access Button, Lean Library and EndNote Click. It is therefore worth researching which browser extensions best suit an institution’s environment, for the discovery of both local open access content and external content.

In order to assess read and publish or subscribe to open agreements, creating and maintaining article-level metadata for local research outputs is critical. Without a codified practice for capturing article metadata in place, it will become increasingly difficult to perform qualitative and quantitative reporting and assessment in future years. It is simply not possible at this juncture to rely solely on commercially available or publisher-provided metadata. For example, a 2019 study indicated that the metadata used by numerous humanities and social science journals did not meet the standards put forth by Plan S. It is important to develop, and consistently apply, local metadata standards for open access content. These also need to meet cOAlition S recommended practices.

The adoption of Plan S principles makes a clear case for the use of persistent identifiers (PIDs) as a core communication practice. Library teams, alongside legacy commercial publishers, are encouraged to adopt PIDs to make open access content more retrievable. Many, particularly in scientific, technical and medical (STM) disciplines, are doing a fairly good job of capturing ORCID iDs and DOIs for all content elements (data, figures, tables, charts and publications), but there is much still to be captured beyond these elements, such as organizations, grants and licenses. The Research Organization Registry (ROR) is emerging as a key element in providing a transparent identifier for organizations and could be extremely helpful when evaluating or tracking research and scholarship from an institution.
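As an illustration of what a locally maintained, PID-rich article record might contain, the sketch below uses invented field names and example identifiers (the ORCID iD is ORCID’s documented example; the ROR ID, DOIs and grant number are placeholders), rather than any formal cOAlition S schema:

    # Illustrative article-level record for a local open access register.
    # Field names are an assumption, not a prescribed standard.
    article_record = {
        "title": "An example article title",
        "doi": "10.1234/example.2021.001",                      # placeholder DOI
        "authors": [
            {"name": "A. Researcher",
             "orcid": "https://orcid.org/0000-0002-1825-0097"}, # ORCID's documented example iD
        ],
        "affiliation_ror": "https://ror.org/00000example",      # placeholder Research Organization Registry ID
        "funder": {"name": "Example Funder",
                   "funder_id": "10.13039/501100000000"},       # Crossref Funder Registry-style ID (placeholder)
        "grant_number": "EX/123456/1",                          # placeholder
        "journal_issn": "1234-5678",
        "license": "CC BY 4.0",
        "oa_status": "gold",
        "version": "version of record",
        "agreement": "Read and publish 2021",                   # which deal covered the APC, if any
    }

Capturing fields like these consistently at the point of deposit makes later reporting against read and publish agreements considerably easier.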

Lastly, it should be noted that library information systems (LIS) are still too oriented towards the standards required for print publication and lag behind in the incorporation of PIDs and metadata for open access content. It is imperative that libraries work with LIS providers to encourage them to incorporate these fields in order to allow for greater discovery of all open content.

TERMS 4. Troubleshooting

General wisdom has it that freely available content is readily discoverable and available for widespread use. However, anyone who has worked consistently with open access content knows this is a misconception. Most of the problems with open access content are the result of metadata discrepancies, either due to inaccurate data supplied by publishers or key metadata fields being lost when passed on to suppliers and vendors. These metadata issues vary and cause confusion over the determination of the version of record of any given article.

Figure 3 shows an example of two records from the same publisher. They appear to be different articles but are in fact the same article, referenced from two different URLs. Both are presented in a commercial library discovery system search for the term ‘Unpaywall’, so they provide ready access to the content, but they add to the confusion by presenting different ‘discoverable’ metadata. This could lead to incorrect citations, particularly if the discovery system is used to generate the citation.

Figure 3 

Screenshot of discovery system showing two discovery points for the exact same article appearing slightly different due to the metadata supplied

This illustrates how metadata transferred from the hosting site to the discovery system can be distorted. Therefore, it is important for library teams managing open content to report issues such as these back to the publisher/vendor to help remove confusion from the scholarly record.
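One simple way to surface such duplicates before reporting them is sketched below, assuming records can be exported from the discovery system with an identifier field; the file name and column headings are hypothetical:

    import csv
    from collections import defaultdict

    def normalise_doi(value: str) -> str:
        """Reduce the various URL forms of a DOI to a bare, lower-case DOI."""
        value = value.strip().lower()
        for prefix in ("https://doi.org/", "http://doi.org/", "https://dx.doi.org/", "doi:"):
            if value.startswith(prefix):
                value = value[len(prefix):]
        return value

    # Hypothetical export of discovery records with 'record_id' and 'doi' columns.
    records_by_doi = defaultdict(list)
    with open("discovery_export.csv", newline="") as f:
        for row in csv.DictReader(f):
            records_by_doi[normalise_doi(row["doi"])].append(row["record_id"])

    for doi, record_ids in records_by_doi.items():
        if len(record_ids) > 1:
            print(f"Duplicate discovery records for {doi}: {record_ids}")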

An emerging issue for open access titles arises when they transfer from one publisher, scholarly society or higher education institution to another. Due to the openness of the content, the protocols put in place by the Transfer Code of Practice are not always followed, or even known, by the parties involved. This leads to discovery dead-ends where communication about the transfer is not widely distributed along the information chain. Members of the Project Transfer board understand the need to expand the application of the Transfer protocol more broadly to open access providers and platforms. Perhaps the most concerning issue to emerge, however, is that the transfer of titles can lead to loss of access through the paywalling of open content on the new platform. In addition, the correct Creative Commons license does not always transfer accurately, or could be lost entirely. This is why direct deposit of all gold open access articles into a repository is critical, as these problems often result in library teams having to review the article submission in an institutional repository to determine which Creative Commons license is in use.

Articles published by authors receiving Plan S compliant research funding must be made open access and should acknowledge the funding source. However, in most article submission systems, the language regarding the funding agency is not made clear to the submitting author(s). Corresponding or primary authors often indicate a funding source because a grant underwrote the research project, even though that grant is not necessarily being used to ‘pay’ for the article publication. This was illustrated by Marques and Stone in their analysis of the UK Springer Compact Agreement Pilot, where no further analysis could be made of funders’ data ‘because of the insufficient quality of the funder metadata’.

Legacy publishers, in conjunction with funding agencies and third-party vendors of submission systems, need to develop more accurate language regarding funders and fund reporting within their submission systems.

TERMS 5. Assessment

Assessment of the value of big deal or individual journal subscriptions has traditionally centered on cost per download (CPD). In a pure open access world this method of assessment becomes obsolete, as there is no download (subscription) cost to measure. However, in transformative read and publish agreements, CPD still has a role to play for the read part of the agreement, whereas the publish costs need to be assessed in a different way.

Any assessment of these agreements has a different emphasis for institutions that publish large numbers of research articles than for institutions with a teaching and learning focus that might publish relatively few articles. Evidence is now beginning to be published in this area as the first read and publish agreements are analysed.

Regarding the ‘publish’ element, the notion of APC cost avoidance can be used to assess the value of a transformative agreement. This is calculated by comparing the total list value of the APCs for the articles published with the value of the publish element of the agreement. An institution whose open access articles published annually have a greater APC value than the ‘publish’ costs in the agreement is said to have ‘offset’ its APC expenditure. This method also allows the average APC to be calculated, by dividing the publish costs by the number of articles published and comparing this with the average APC for that publisher. A similar method is also being used by the European University Association (EUA) to assess costs at an international level.
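A minimal worked sketch of this calculation, using entirely hypothetical figures (the variable names and numbers are invented for illustration and are not drawn from any real agreement):

    # Hypothetical figures for one year of a read and publish agreement.
    publish_fee = 40_000.00              # 'publish' element of the agreement
    apc_list_prices = [2_200.00] * 25    # list-price APCs of the articles published open access

    total_apc_value = sum(apc_list_prices)   # 55,000: the APC spend avoided

    # The 'publish' element is offset when the list value of the APCs published
    # exceeds the publish fee; the difference indicates the scale of cost avoidance.
    publish_offset = total_apc_value > publish_fee

    # Effective average APC paid under the agreement, for comparison with the
    # publisher's average list APC.
    average_apc = publish_fee / len(apc_list_prices)

    print(f"Publish element offset: {publish_offset}")
    print(f"Average APC under the agreement: {average_apc:.2f}")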

This calculation can be taken one stage further: if the value of the APCs for articles published by an institution is greater than the combined read and publish fee, the institution can be said to have offset the entire agreement and might be considered to have ‘flipped’ the agreement to open access (for a given year, or for the full duration of the agreement).

To assess the ‘read’ element of the agreement, an ‘adjusted CPD’ can be calculated by measuring usage of the subscribed titles (excluding open access usage) against the read element of the fee, minus any value of APCs published above the publish element. The adjusted CPD reaches zero if the institution flips the agreement.
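Continuing the same hypothetical figures, the ‘flipped’ test and the adjusted CPD described above can be sketched as follows (numbers and names again invented purely for illustration):

    # Hypothetical figures, continuing the example above.
    read_fee = 80_000.00                 # 'read' element of the agreement
    publish_fee = 40_000.00              # 'publish' element of the agreement
    total_apc_value = 55_000.00          # list value of APCs for articles published open access
    subscribed_usage = 60_000            # downloads of subscribed titles, excluding open access usage

    # The whole agreement is 'flipped' when the APC value published exceeds read + publish.
    flipped = total_apc_value > (read_fee + publish_fee)

    # Adjusted CPD: the read fee, reduced by any APC value above the publish element,
    # divided by subscribed (non-open access) usage; it reaches zero once the agreement is flipped.
    excess_apc_value = max(total_apc_value - publish_fee, 0)
    adjusted_cpd = max(read_fee - excess_apc_value, 0) / subscribed_usage

    print(f"Agreement flipped: {flipped}")
    print(f"Adjusted cost per download: {adjusted_cpd:.2f}")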

By assessing agreements in this way, research-intensive institutions can measure the value of the ‘publish’ element by comparing it against the average APC. For teaching institutions, even a small number of published articles starts to bring down the adjusted CPD.

However, in order to truly assess the value of these agreements, to prevent them from becoming ‘business as usual’ and to follow the principles of Plan S, it is important to look at the national and international picture, for example through the EUA reports and the data made available by OpenAPC at Bielefeld University. OpenAPC ‘releases datasets on fees paid for open access journal articles by universities and research institutions under an open database license’.
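As a sketch of how the OpenAPC dataset might support this kind of benchmarking, the snippet below loads the aggregated dataset from GitHub; the file location and column names reflect the repository at the time of writing and should be checked against the current OpenAPC documentation:

    import pandas as pd

    # Location and column names are assumptions to verify against
    # https://github.com/OpenAPC/openapc-de
    URL = "https://raw.githubusercontent.com/OpenAPC/openapc-de/master/data/apc_de.csv"

    apc = pd.read_csv(URL, usecols=["period", "publisher", "euro", "is_hybrid"])

    # Average and median APC paid per publisher and year, split by hybrid vs pure gold.
    benchmark = (
        apc.groupby(["publisher", "period", "is_hybrid"])["euro"]
           .agg(["mean", "median", "count"])
           .round(2)
    )
    print(benchmark.sort_values("count", ascending=False).head(20))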

TERMS 6. Preservation and sustainability

There is great concern over the potential loss of open scholarship unless it is appropriately preserved. For example, Laakso, Matthias and Jahn revealed a disturbing trend of open scholarship not being maintained or remaining available, for a variety of reasons. The authors posited that those engaged in open access publishing within small society and learning communities need to give more consideration to how scholarly outputs are retained. Preservation platforms such as CLOCKSS or Internet Archive Scholar are poised to play a greater role but have to be approached by the content providers to do so.

There is a role here for the library. Librarians at every institution should think deeply about the preservation and sustainability question for all online content, but especially about the proliferation of open access content. Firstly, some read and publish agreements are written in a way that removes post-cancellation rights or local preservation of content, and this should be challenged at the negotiation stage. Secondly, serving the long-term needs of the local community is a priority. Local CMD policies or guidelines often drive what is to be preserved or sustained at each institution, so it is imperative that these policies encompass all local scholarly outputs. However, the issue of preservation is wider than capturing local scholarship within the repository; indeed, depositing work in a repository is not preservation. Alternatives should therefore be sought, although there are concerns that using some systems effectively moves open access content back behind a paywall.

Sustainable access to subscription content can be approached in various ways, but there is growing interest in understanding how much subscribed content may already be available as open access. To this end, Unpaywall has seen significant interest in its new evaluation tool, Unsub. However, this is not yet a definitive answer to understanding this aspect of the scholarly publishing environment.
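The underlying approach can be illustrated with the Unpaywall REST API, which reports whether an individual article has an open access copy; the DOIs and e-mail address below are placeholders, and the response fields should be verified against the public API documentation (https://unpaywall.org/products/api):

    import requests

    # Placeholder values: substitute real DOIs for the titles under review and a
    # genuine contact e-mail address, which the Unpaywall API requires.
    DOIS = ["10.1234/example-article-1", "10.1234/example-article-2"]
    EMAIL = "eresources@example.edu"

    oa_count = 0
    for doi in DOIS:
        resp = requests.get(f"https://api.unpaywall.org/v2/{doi}", params={"email": EMAIL})
        resp.raise_for_status()
        record = resp.json()
        # 'is_oa' and 'oa_status' (gold, green, hybrid, bronze, closed) are fields
        # documented in the Unpaywall data format.
        if record.get("is_oa"):
            oa_count += 1
            print(doi, record.get("oa_status"))

    print(f"{oa_count}/{len(DOIS)} sampled articles already have an open access copy")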

As discussed above, one notable principle of Plan S is the transparency of costs. Through tools like ESAC’s market watch and SPARC’s big deal cancellation tracking, librarians have much more information available to them. Coupling this information with the open access availability data provided by Unsub helps librarians renegotiate or break up journal package deals in order to provide journal content more sustainably to their academic communities.

At this point, the life cycle circles back to the considerations outlined in the first section of TERMS, especially in regard to CMD policies. Some disciplines are well served with content coming mostly from open access outputs, preprint repositories and inter-library loans (ILL), although Olsson et al. noted that some researchers had strong concerns about ILL as an alternative following the cancellation of the Elsevier agreement. Other disciplines require participation in subscribe to open funding models, membership models or transformative read and publish agreements. These are the conversations that need to be held institution-wide in order to determine what is most sustainable for each discipline.

Lastly, and most importantly, libraries should become familiar with the Plan S Rights Retention Strategy, as this is essential for understanding how preservation of open access content can move forward and which version of an article from the legacy scholarly publishers can be preserved. As Hinchliffe observes, most legacy publishers appear less than keen on accepting the implementation plan laid out by cOAlition S with regard to rights retention, and it will be interesting to see what the future of this funder mandate may be.

The next major themes

As noted above, CPD metrics become obsolete if content becomes open. However, usage is still important. Instead of measuring whether subscribed titles are read within an institution, the reach of the institution’s authored articles needs to be measured. Geolocation of the usage of an institution’s outputs is one way of measuring impact, by looking at open access usage by region and/or country. However, there are pros and cons to geolocation. A positive illustration is the 2020 research from Springer Nature and COARD (Collaborative Open Access Research and Development), which analysed usage of open access books, including where the access originated, by recording the IP range. The report demonstrated that open access books attracted a greater geographical diversity of usage than paywalled content and that this usage was increasing in low-income and lower-middle-income countries, including a high number of countries in Africa.

However, there are concerns about geolocation technology being used to limit open access usage to a specific region and/or country, known as ‘geo-blocking’, either by a publisher or by a governmental body. Other concerns raised include personal data being used for marketing campaigns, to solicit content predatorily and to track the usage of individuals.

There is a role for non-traditional metrics, if used responsibly and in tandem with other success indicators, but open standards need to be enacted as a way to ensure ethical uses of these metrics. For example, non-traditional metrics can give an early indicator about the impact of a publication on social media and policy tracking. However, these metrics can be very dependent on the scholar’s own use of social media and awareness of self-promotion. In addition, high ‘scores’ in non-traditional metrics will not necessarily equate to high usage and citations.

Open access for books is a developing area. Progress has been relatively slow so far, with only a handful of early adopters in Austria, the Netherlands and Switzerland, plus the Wellcome Trust in the UK. However, a number of initiatives and funder policies are in progress, not least Plan S, which will consider guidelines for monographs and book chapters towards the end of 2021. Nevertheless, there is still much to do regarding cultural change, not just for authors, but also for library acquisitions, sustainable business models and the development of an interoperable infrastructure to facilitate the transition to open for books.

Finally, and perhaps most importantly, the equity of open access content creation should be the foremost concern for everyone involved with open content production. For example, research by Olejniczak and Wilson indicates that, regardless of green or gold open access, or whether APCs were charged or not, the majority of open access content produced in the United States is written by men employed at prestigious universities who have reached later career status. This study gives us pause to reflect on whether the open access models we are currently engaged in live up to the ideals set forth by the Budapest Open Access Initiative. Given how long it has taken to get to this point with open access, further advances and changes are needed for open access scholarship to attain equity of representation and a diversity of voices being heard.

Conclusion

This article provides areas of consideration and resources for those embarking on streamlining open access management within their respective organizations. It has outlined the major issues and concerns on the topic as well as some ways to respond to them. However, there are constant developments and changes in this landscape, which will require responses that will probably have been missed by the time this article reaches publication.

In recognizing the thirty-year history of open access content, we, as information professionals, have come a long way since those first nascent websites, journals and preprint services. Yet there is still much more to do with open access as part of the new reality, particularly given the massive move by higher education institutions to remote learning paradigms, which is likely to be sustained post-pandemic.

Our countries now face significant financial recessions. Therefore, the content provision models of electronic resources and access we have been engaged with up to this point are simply not sustainable in the years to come. There is a desire to form partnerships and find new paradigms for scholarly content provision that serve the world and not just shareholders. However, the Pareto principle is certain to apply in a Plan S world, where large transformative agreements are likely to take up far more staff and financial resources than smaller subscribe to open and community agreements. That said, now is not the time to lean into the ‘easy’ work but to continue to strive to find new and more equitable models of open access creation and dissemination. This will mean taking risks on emerging models that attempt to achieve a diversity of scholarship that is open to all.

In many ways, we are just at the beginning of reimagining and comprehending what open scholarship is and can be going forward.