Journal selection, rather than deselection, was once a key topic within the library literature. More than 70% of the serials review studies published in the 1970s and 80s focused on the identification and acquisition of new journals. In contrast, nearly 80% of the studies published since 2000 have emphasized deselection (cancellation).1 In fact, many academic libraries consider new journals mainly on an ad hoc basis, often in response to requests from academic departments or sales pitches from vendors. The absence of systematic procedures for the selection of new titles is a problem, since any evaluation limited to current holdings (‘renew or cancel?’) provides no means of determining (a) whether other journals, not already held, might better meet patrons’ needs, or (b) whether new subscriptions – single journals or full-text databases – might provide more cost-effective access than the subscriptions already maintained by the library.2
Our serials review at Manhattan College, completed in 2020, introduced a two-stage process whereby we first identified the journals that would best meet the teaching and research needs of the faculty (‘wanted journals’), then objectively determined which acquisition opportunities – which single-journal subscriptions, full-text databases or open access journals – would provide current access to an adequate number of wanted journals at the lowest possible cost. By decoupling journal selection from acquisition, we avoided the need to subjectively evaluate entire full-text databases, a task made difficult by the wide range of journals included within each database. Instead, we evaluated individual journals strictly on the basis of their content, without giving preferential treatment to those initially held by the College. In the second stage, we selected the most cost-effective acquisition opportunities through an objective procedure that considered just one criterion – cost per wanted journal – and that allowed new resources to compete on an equal footing with those already in the collection.3 The methods and results of the Manhattan College serials review are described fully elsewhere.4 This brief paper focuses on the general principles underlying our methods and on the benefits of the two-stage review process.
Although the project was not intended to cut costs, it could easily have been used for that purpose. The library initially held 1,404 of the 2,717 wanted journals identified by the Manhattan College faculty, and our analysis revealed that we could have acquired the same number of wanted journals for just 37% of our initial expenditure. This potential for savings may be especially important for universities that face budget cuts due to enrolment difficulties or declining government funding.
Two terms should be defined at the start. In this context, ‘full-text journal resources’ include databases, single-journal subscriptions and open access journals. In turn, ‘databases’ are full-text databases, online collections and journal packages, whether publisher-specific (e.g. SpringerLink), subject-specific (e.g. EconLit with Full Text) or neither (e.g. EBSCO’s Academic Search Complete). Our review process did not include other types of online resources – purely bibliographic databases, for instance, or collections of newspapers and popular magazines.
Three broad approaches to journal selection can be identified in the literature of academic librarianship.5 The single-criterion approach, in widespread use before 1980, identifies the set of journals that minimizes or maximizes the value of a single criterion (e.g. cost per use or the percentage of journals with high citation impact). Because this method requires the clear specification of just one objective, it encourages librarians to think carefully about the ultimate goals of their collection-building efforts.6 (1)
The composite score approach, prominent in the 1980s but seldom used thereafter, involves the calculation of a single score that accounts for multiple factors. Each factor is weighted by its importance to the outcome. This method has a strong subjective component since there are no clear standards for the selection and weighting of the component variables.7
The multiple-criteria approach, adopted in recent decades, involves the consideration of multiple criteria without the calculation of a composite score or the explicit weighting of variables. The relevant criteria can be considered all at once, or consecutively. For example, we might first select the journals that are most often cited by the academic staff at our own institution, regardless of their overall citation impact; then select those that are rated ‘essential’ by the faculty and that meet minimum standards for cost-effectiveness; then, finally, select low-cost journals that are highly cited (overall) and included in key bibliographic indexes.8
Any of the three approaches can be used in Stage 1, depending on the needs of the institution. Apart from cost, the factors most often considered in the literature of the past decade are local use (e.g. online downloads and views), the subjective ratings of faculty or other subject experts, the number of times university staff have cited or published in the journal, inclusion in major bibliographic databases, overall citation impact, availability through the collections of other libraries and use in interlibrary loan.9
At Manhattan College, we found it appropriate to select wanted journals by using a single criterion: whether each journal was identified by the faculty of an academic department or program as important to their teaching and research. We first provided each department with a list of the journals in the appropriate subject area(s) of Journal Citation Reports (JCR) and/or the 2010 Excellence in Research for Australia (ERA) project, giving each department a selection target of either 0.3 times the number of journals in the matching JCR subject categories or 1.0 times the number of A*-rated journals plus 0.7 times the number of A-rated journals in ERA. (Evaluations of the subject categories included in both JCR and ERA demonstrated that these two standards are roughly equivalent.) Most department chairs consulted with all or many of their full-time faculty, either (a) asking them to individually choose the most important journals, then compiling a list of those most often selected, or (b) meeting as a group to collaboratively choose the most important titles. The specific selection criteria were left to the faculty in each department. Nearly all chose the journals with the highest citation impact in their fields, along with the more prominent review journals, teaching-oriented journals and practice-oriented journals. The correlations between selection status (wanted or not) and Eigenfactor (which represents the citation impact of the journal as a whole) were lowest for biology, the health professions, engineering, organizational leadership and psychology, where the faculty tended to select more practice-oriented journals.10
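To make the arithmetic concrete, the two target formulas can be sketched in a few lines of Python. The department-level journal counts shown here are hypothetical; the actual figures came from the matching JCR subject categories and the 2010 ERA journal ratings.

```python
# A minimal sketch of the selection targets described above.

def jcr_target(n_jcr_journals: int) -> float:
    """Target = 0.3 x the number of journals in the matching JCR categories."""
    return 0.3 * n_jcr_journals

def era_target(n_a_star: int, n_a_rated: int) -> float:
    """Target = 1.0 x the A*-rated journals plus 0.7 x the A-rated journals."""
    return 1.0 * n_a_star + 0.7 * n_a_rated

# A department whose JCR categories list 180 journals, or whose ERA
# categories list 20 A* and 48 A journals, gets roughly the same target:
print(round(jcr_target(180)))      # 54 wanted journals
print(round(era_target(20, 48)))   # 54 wanted journals -- roughly equivalent
```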
One advantage of selecting individual journals rather than full-text databases is that it allows for the consideration of fundamental principles that are more difficult to take into account when multiple journals are evaluated together. In the Manhattan College case, we developed our Stage 1 procedures with five principles in mind.
Each of these principles is based on research in librarianship, information science or related fields, but each is sometimes disregarded when immediate concerns such as prices, budgets and renewal deadlines are foremost in the minds of library staff. With our procedure, however, there is no need to consider cost during Stage 1, since cost-effectiveness is central to Stage 2. As discussed in the next section, cost is an attribute of the acquisition opportunity – the subscription or mechanism by which the journal is acquired – rather than an attribute of the journal itself.
Unlike Stage 1, our Stage 2 procedure is necessarily a single-criterion method: it requires the identification or construction of a single variable that represents the goal of the serials review. At Manhattan College, we chose ‘cost per wanted journal’ as our criterion. Our goal, therefore, was to identify the set of acquisition opportunities that minimized cost per wanted journal. Stage 2 can be designed to either minimize or maximize the criterion, and the use of a composite variable based on the values of several other variables is perfectly acceptable. For instance, we could have calculated ‘academic utility’ on a scale of one to ten based on citation impact, subjective assessments and other attributes, then proceeded to Stage 2 with the goal of minimizing cost per unit of academic utility. In practice, however, our calculation of cost per wanted journal was based on the assumption that each wanted journal within a particular database was of equal value.
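As a concrete illustration, the two candidate criteria might be computed as follows. The selection logic is identical either way; only the denominator changes. The 1–10 utility scores in the second function are assumptions for the sake of the example, not values we actually collected.

```python
# A sketch of the two Stage 2 criteria mentioned above.

def cost_per_wanted_journal(resource_cost: float, n_wanted: int) -> float:
    # Each wanted journal in the resource is treated as equally valuable.
    return resource_cost / n_wanted

def cost_per_unit_utility(resource_cost: float,
                          utility_scores: list[float]) -> float:
    # utility_scores: a hypothetical 1-10 'academic utility' rating for
    # each wanted journal in the resource.
    return resource_cost / sum(utility_scores)

print(cost_per_wanted_journal(4500.0, 30))             # 150.0 per wanted journal
print(cost_per_unit_utility(4500.0, [8.0, 5.0, 7.0]))  # 225.0 per unit of utility
```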
‘Acquisition opportunities’ are central to Stage 2. Each acquisition opportunity represents the combination of a journal (a particular title) and a full-text journal resource – i.e. a single-journal subscription or a full-text database/collection/package. For instance, The Sociological Review can be acquired as a single-journal subscription, through any of three SAGE full-text databases, or through SocINDEX with Full Text or Sociology Source Ultimate. Consequently, there are six acquisition opportunities associated with the journal, with Manhattan College costs ranging from US$155 to US$1,197. (Each wanted open access journal was counted as a single acquisition opportunity with a cost of US$0.)
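One possible data model, sketched in Python: each resource carries a single annual cost and supplies some set of wanted journals, and an acquisition opportunity is simply a (journal, resource) pair. The resource names, and the assignment of costs to particular resources, are illustrative placeholders rather than the actual Manhattan College figures.

```python
# A sketch of the Stage 2 data model. Each full-text resource has one
# annual cost and supplies some set of wanted journals; every
# (journal, resource) pair is one acquisition opportunity.

from dataclasses import dataclass

@dataclass(frozen=True)
class Resource:
    name: str
    annual_cost: float   # US$; 0.0 for an open access journal
    journals: frozenset  # the wanted journals the resource supplies

# Two hypothetical resources offering The Sociological Review; the
# costs are drawn from the range quoted above, but which opportunity
# costs what is illustrative only.
TSR = "The Sociological Review"
resources = [
    Resource("Single-journal subscription", 155.0, frozenset({TSR})),
    Resource("Full-text database A", 1197.0,
             frozenset({TSR, "Journal B", "Journal C"})),
]

# Enumerating the acquisition opportunities:
opportunities = [(j, r.name) for r in resources for j in r.journals]
```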
Stage 2 is essentially a set of five tasks, which are described more fully elsewhere.14
Steps 3, 4 and 5 should be repeated until the goals of the serials review have been met. With a zero-based procedure, there is no need for explicit deselection (cancellation) decisions, since cancellations will be made whenever a previously held journal is not selected through the Stage 2 process.
Our iterative procedure ensures that the most cost-effective resources are selected first; each successive round then selects the next most cost-effective resource. Steps 3, 4 and 5 can be repeated until pre-established objectives have been met or until the available funds have been spent. For instance, the goal may be to acquire a fixed percentage of the wanted journals (e.g. the same percentage initially held), to acquire all the wanted journals that can be obtained for less than a specified per-journal cost, or to see how many wanted journals can be acquired without any change in total expenditure.
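The iterative logic of steps 3, 4 and 5 can be sketched as a greedy loop, assuming the Resource objects from the earlier sketch. The stopping rule shown here (a budget cap) is just one of the possibilities described above.

```python
# A sketch of the iterative Stage 2 selection (steps 3-5). Each round
# picks the resource with the lowest cost per not-yet-acquired wanted
# journal, then removes the journals it supplies before the next round.

def select_resources(resources, wanted, budget=None):
    held, chosen, spent = set(), [], 0.0
    remaining = list(resources)
    while remaining:
        # Step 3: cost per wanted journal, counting only new titles.
        def new_titles(r):
            return r.journals & (wanted - held)
        candidates = [r for r in remaining if new_titles(r)]
        if not candidates:
            break  # no resource supplies any further wanted journal
        best = min(candidates,
                   key=lambda r: r.annual_cost / len(new_titles(r)))
        if budget is not None and spent + best.annual_cost > budget:
            break  # stop once the available funds have been spent
        # Steps 4-5: select the resource and update the holdings.
        held |= new_titles(best)
        chosen.append(best)
        spent += best.annual_cost
        remaining.remove(best)
    return chosen, held, spent
```

Note that open access journals, modelled as resources with a cost of US$0, are always selected first under this rule, since their cost per wanted journal is zero.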
The Stage 2 procedure is largely mechanical, and it is not grounded in any particular collection development principles. However, it does address a major criticism of full-text databases – that they include many journals that do not meet the library’s usual selection standards.16 Because cost per wanted journal is based solely on the number of wanted journals in each database, our procedure ensures that ‘unwanted’ journals do not enter into the evaluation of cost-effectiveness. No credit is given for titles that do not support the needs of students and faculty, and the number of unwanted journals in a database has no bearing on the results of the procedure.
Our method does not account for database content other than full-text journals, or for characteristics such as interface design or quality of bibliographic records. (2) It is therefore not appropriate for the evaluation of resources that are primarily bibliographic, or for specialized databases that provide access to images, streaming media, news articles, tests, reference materials, e-books or statistical data.
As noted in the Introduction, our two-stage review process could have achieved dramatic cost savings for Manhattan College if that had been our goal. That is, we could have ended the iterative process after 21 rounds, acquiring 1,544 of the wanted journals (i.e. more than the 1,404 initially held) and paying just 37% of our initial cost. Because expenditure reduction was not one of our goals, however, we continued for 43 rounds, achieving a 3% reduction in total cost, a 50% increase in the percentage of wanted journals held, a 35% reduction in cost per wanted journal and a substantial reduction in inequality of holdings across academic departments. Although we were concerned that the strong emphasis on wanted journals might substantially reduce the overall number of titles held, our total serials count declined by just 5%.
By decoupling the selection of wanted journals from the identification of cost-effective acquisition mechanisms, our method allows for the evaluation of each journal solely on the basis of quality. Quality may incorporate any number of locally important attributes – reputation, scholarly impact, demonstrated instructional value or importance to particular groups within the university, for instance – but factors unrelated to quality are specifically excluded from the Stage 1 review process. In particular, our method minimizes any bias associated with publisher or publisher type. All publishers compete on an equal footing. This is appropriate, since editors – not publishers – ultimately determine journal quality through the processes of manuscript solicitation, review, revision and acceptance or rejection.
Moreover, these results reveal that substantial savings can be realized when each journal is acquired in the most cost-effective way (Stage 2). It is misleading to refer to the price of any particular journal, since price is an attribute of the acquisition opportunity – not of the journal itself. For instance, the American Political Science Review can be acquired as a single-journal subscription or through any of 12 databases, at annual prices ranging from US$105 to US$1,774. Our review method capitalizes on the distinction between journals and acquisition opportunities, demonstrating that the acquisition of journals through full-text databases is in no way inconsistent with title-by-title selection. Even when each wanted journal is selected individually, package deals usually provide the most cost-effective means of acquisition. Of the 1,232 wanted journals available to Manhattan College through both full-text databases and single-title subscriptions, 88% are less expensive when acquired through a database.17
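One plausible way to operationalize that 88% comparison, again assuming the Resource objects sketched earlier, is to compare each journal’s single-title price with the lowest cost per wanted journal among the databases that include it. The study’s actual comparison may have been computed somewhat differently.

```python
# A sketch of the price comparison reported above: is the journal
# cheaper through some database than as a single-title subscription?

def cheaper_through_database(journal, single_title_price, databases):
    # Per-journal cost of a database = its cost / its wanted journals.
    per_journal_costs = [db.annual_cost / len(db.journals)
                         for db in databases if journal in db.journals]
    return bool(per_journal_costs) and min(per_journal_costs) < single_title_price
```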
(1) Megan G. Kilb, Virginia B. Martin, and Tessa L. Minchew, “Time to take new measures: developing a cost-per-cited-reference metric for the assessment of e-journal collections,” Proceedings of the 2016 Charleston Library Conference (2016), https://docs.lib.purdue.edu/charleston/2016/cdaassess/2/ (accessed 26 May 2021) is one of the few recent studies to adopt a single-criterion approach.
(2) Other review methods also tend to discount these factors, however; see Ilana R. Barnes, “Incorporating usability into the database review process: new lessons and possibilities,” Proceedings of the 2013 Charleston Library Conference (2013), DOI: https://doi.org/10.5703/1288284315267 (accessed 27 May 2021); Philip M. Davis and Jason S. Price, “eJournal interface can influence usage statistics: implications for libraries, publishers, and Project COUNTER,” Journal of the American Society for Information Science and Technology 57, no. 9 (2006): 1243–1248, DOI: https://doi.org/10.1002/asi.20405 (accessed 27 May 2021); Ilana R. Stonebraker, “Measuring usability in the database review process: results from a pilot,” Journal of Library Innovation 6, no. 2 (2015): 15–34, https://sites.google.com/site/journaloflibraryinnovation/vol-6-no-2-2015 (accessed 27 May 2021).
The authors have declared no competing interests.
Jamie Webster Hastreiter, “Guidelines for periodical acquisition and budget control: an overview of selection and deselection in the small academic library,” in Operations Handbook for the Small Academic Library, ed. Gerard B. McCabe (New York: Greenwood Press, 1989), 229–238; Margaret Hawthorn, “Serials selection and deselection: a survey of North American academic libraries,” The Serials Librarian 21, no. 1 (1991): 29–45, DOI: https://doi.org/10.1300/J123v21n01_03 (accessed 26 May 2021); William H. Walters and Susanne Markgren, “Zero-based serials review: an objective, comprehensive method of selecting full-text journal resources in response to local needs,” The Journal of Academic Librarianship 46, no. 5 (2020): 102189, Appendix A, DOI: https://doi.org/10.1016/j.acalib.2020.102189 (accessed 26 May 2021).
Steven Shapiro, “Database cancellation: the ‘hows’ and ‘whys’,” Journal of Electronic Resources Librarianship 24, no. 2 (2012): 154–156, DOI: https://doi.org/10.1080/1941126X.2012.684564 (accessed 26 May 2021).
Arthur W. Hafner, “Primary journal selection using citations from an indexing service journal: a method and example from nursing literature,” Bulletin of the Medical Library Association 64, no. 4 (1976): 392–401, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC199258/ (accessed 26 May 2021); Donald H. Kraft and T.W. Hill Jr., “A journal selection model and its implications for a library system,” Information Storage & Retrieval 9, no. 1 (1973): 1–11, DOI: https://doi.org/10.1016/0020-0271(73)90003-X (accessed 26 May 2021); Donald A. Windsor, “Rational selection of primary journals for a biomedical research library: the use of secondary journal citations,” Special Libraries 64, no. 10 (1973): 446–451, https://scholarworks.sjsu.edu/sla_sl_1973/8/ (accessed 7 June 2021).
S.M. Dhawan, S.K. Phull, and S.P. Jain, “Selection of scientific journals: a model,” Journal of Documentation 36, no. 1 (1980): 24–32, DOI: https://doi.org/10.1108/eb026689 (accessed 26 May 2021); Vesna Oluić-Vuković and Nevenka Pravdić, “Journal selection model: an indirect evaluation of scientific journals,” Information Processing & Management 26, no. 3 (1990): 413–431, DOI: https://doi.org/10.1016/0306-4573(90)90100-G (accessed 26 May 2021); Nevenka Pravdić and Vesna Oluić-Vuković, “Application of overlapping technique in selection of scientific journals for a particular discipline: methodological approach,” Information Processing & Management 23, no. 1 (1987): 25–32, DOI: https://doi.org/10.1016/0306-4573(87)90036-7 (accessed 26 May 2021).
Diane Cunningham, “Assessing and selecting journals for your library’s core list,” Information Outlook 7, no. 11 (2003): 40–42, 45, https://scholarworks.sjsu.edu/sla_io_2003/11/ (accessed 7 June 2021); Hilary M. Davis and Gregory K. Raschke, “Data informed and community driven: using data and feedback loops to manage a journal review and cancellation policy,” Against the Grain 29, no. 2 (2017): 12, 14–15, 20, https://docs.lib.purdue.edu/cgi/viewcontent.cgi?article=7744&context=atg (accessed 7 June 2021); Diane Dawson, “A triangulation method to dismantling a disciplinary ‘big deal,’” Issues in Science & Technology Librarianship 80, no. 2 (2015), DOI: https://doi.org/10.5062/F4610X9H (accessed 26 May 2021); Harriet Lightman and Sabina Manilov, “A simple method for evaluating a journal collection: a case study of Northwestern University Library’s economics collection,” The Journal of Academic Librarianship 26, no. 3 (2000): 183–190, DOI: https://doi.org/10.1016/S0099-1333(00)00097-5 (accessed 26 May 2021); Peter Zhang and Ashley Zmau, “Review in motion: multi-year electronic resources review at UTA libraries,” Proceedings of the 2015 Charleston Library Conference (2015), https://docs.lib.purdue.edu/charleston/2015/collectiondevelopment/30/, DOI: https://doi.org/10.5703/1288284316270 (accessed 26 May 2021).
William H. Walters and Susanne Markgren, “Do faculty journal selections correspond to objective indicators of citation impact? Results for 20 academic departments at Manhattan College,” Scientometrics 118, no. 1 (2019): 321–337, DOI: https://doi.org/10.1007/s11192-018-2972-7 (accessed 26 May 2021).
Samuel C. Bradford, “Sources of information on specific subjects,” Engineering: An Illustrated Weekly Journal 137, no. 3550 (1934): 85–86, reprinted in the Journal of Information Science 10, no. 4 (1985): 176–180, DOI: https://doi.org/10.1177/016555158501000407 (accessed 27 May 2021); Thomas E. Nisonger, “The ‘80/20 rule’ and core journals,” The Serials Librarian 55, no. 1–2 (2008): 62–84, DOI: https://doi.org/10.1080/03615260801970774 (accessed 27 May 2021).
Per Ahlgren and Ludo Waltman, “The correlation between citation-based and expert-based assessments of publication channels: SNIP and SJR vs. Norwegian quality assessments,” Journal of Informetrics 8, no. 4 (2014): 985–996, DOI: https://doi.org/10.1016/j.joi.2014.09.010 (accessed 27 May 2021); E. Susanna Cahn, “Journal rankings: comparing reputation, citation and acceptance rates,” International Journal of Information Systems in the Service Sector 6, no. 4 (2014): 92–103, DOI: https://doi.org/10.4018/ijisss.2014100106 (accessed 27 May 2021); Peter Haddawy et al., “A comprehensive examination of the relation of three citation-based journal metrics to expert judgment of journal quality,” Journal of Informetrics 10, no. 1 (2016): 162–173, DOI: https://doi.org/10.1016/j.joi.2015.12.005 (accessed 27 May 2021); Steven A. Knowlton, Adam C. Sales, and Kevin W. Merriman, “A comparison of faculty and bibliometric valuation of serials subscriptions at an academic research library,” Serials Review 40, no. 1 (2014): 28–39, DOI: https://doi.org/10.1080/00987913.2014.897174 (accessed 27 May 2021); William H. Walters, “Do subjective journal ratings represent whole journals or typical articles? Unweighted or weighted citation impact?”, Journal of Informetrics 11, no. 3 (2017): 730–744, DOI: https://doi.org/10.1016/j.joi.2017.05.001 (accessed 14 June 2021); Walters and Markgren, “Do faculty journal selections correspond to objective indicators.”
Katherine Corby, “Constructing core journal lists: mixing science and alchemy,” Portal: Libraries and the Academy 3, no. 2 (2003): 207–217, DOI: https://doi.org/10.1353/pla.2003.0029 (accessed 27 May 2021); John Ewing, “Measuring journals,” Notices of the American Mathematical Society 53, no. 9 (2006): 1049–1053, https://www.ams.org/notices/200609/comm-ewing.pdf (accessed 27 May 2021); Alan Singleton, “Why usage is useless,” Learned Publishing 23, no. 3 (2010): 179–184, DOI: https://doi.org/10.1087/20100301 (accessed 27 May 2021); William H. Walters, “Beyond use statistics: recall, precision, and relevance in the assessment and management of academic libraries,” Journal of Librarianship and Information Science 48, no. 4 (2016): 340–352, DOI: https://doi.org/10.1177/0961000615572174 (accessed 27 May 2021); Alex Wood-Doughty, Ted Bergstrom, and Douglas G. Steigerwald, “Do download reports reliably measure journal usage? Trusting the fox to count your hens?”, College & Research Libraries 80, no. 5 (2019): 694–719, DOI: https://doi.org/10.5860/crl.80.5.694 (accessed 27 May 2021).
Sarah Anne Murphy, “The effects of portfolio purchasing on scientific subject collections,” College & Research Libraries 69, no. 4 (2008): 332–340, DOI: https://doi.org/10.5860/crl.69.4.332 (accessed 27 May 2021); Jonathan Nabe and David C. Fowler, “Leaving the ‘big deal’ … five years later,” The Serials Librarian 69, no. 1 (2015): 20–28, DOI: https://doi.org/10.1080/0361526X.2015.1048037 (accessed 27 May 2021); Brian A. Quinn, “The impact of aggregator packages on collection management,” Collection Management 25, no. 3 (2001): 53–74, DOI: https://doi.org/10.1300/J105v25n03_05 (accessed 27 May 2021); Fei Shu et al., “Is it such a big deal? On the cost of journal use in the digital era,” College & Research Libraries 79, no. 6 (2018): 785–798, DOI: https://doi.org/10.5860/crl.79.6.785 (accessed 27 May 2021); Karla L. Strieb and Julia C. Blixrud, “Unwrapping the bundle: an examination of research libraries and the ‘big deal’,” Portal: Libraries and the Academy 14, no. 4 (2014): 587–615, DOI: https://doi.org/10.1353/pla.2014.0027 (accessed 27 May 2021).
William H. Walters and Susanne Markgren, “Comparing the prices of commercial and nonprofit journals: a realistic assessment,” Portal: Libraries and the Academy 21, no. 2 (2021): 389–410, https://www.muse.jhu.edu/article/787873, DOI: https://doi.org/10.1353/pla.2021.0021 (accessed 27 May 2021).