Introduction

Journal selection, rather than deselection, was once a key topic within the library literature. More than 70% of the serials review studies published in the 1970s and 80s focused on the identification and acquisition of new journals. In contrast, nearly 80% of the studies published since 2000 have emphasized deselection (cancellation). In fact, many academic libraries consider new journals mainly on an ad hoc basis, often in response to requests from academic departments or sales pitches from vendors. The absence of systematic procedures for the selection of new titles is a problem, since any evaluation limited to current holdings (‘renew or cancel?’) provides no means of determining (a) whether other journals, not already held, might better meet patrons’ needs, or (b) whether new subscriptions – single journals or full-text databases – might provide more cost-effective access than the subscriptions already maintained by the library.

Our serials review at Manhattan College, completed in 2020, introduced a two-stage process whereby we first identified the journals that would best meet the teaching and research needs of the faculty (‘wanted journals’), then objectively determined which acquisition opportunities – which single-journal subscriptions, full-text databases or open access journals – would provide current access to an adequate number of wanted journals at the lowest possible cost. By decoupling journal selection from acquisition, we avoided the need to subjectively evaluate entire full-text databases, a task that is difficult due to the wide range of journals included within each database. Instead, we evaluated individual journals strictly on the basis of their content, without giving preferential treatment to those initially held by the College. In the second stage, we selected the most cost-effective acquisition opportunities through an objective procedure that considered just one criterion – cost per wanted journal – and that allowed new resources to compete on an equal footing with those already in the collection. The methods and results of the Manhattan College serials review are described fully elsewhere. This brief paper focuses on the general principles underlying our methods and on the benefits of the two-stage review process.

Although the project was not intended to cut costs, it could easily have been used for that purpose. The library initially held 1,404 of the 2,717 wanted journals identified by the Manhattan College faculty, and our analysis revealed that we could have acquired the same number of wanted journals for just 37% of our initial expenditure. This potential for savings may be especially important for universities that face budget cuts due to enrolment difficulties or declining government funding.

Two terms should be defined at the start. In this context, ‘full-text journal resources’ include databases, single-journal subscriptions and open access journals. In turn, ‘databases’ are full-text databases, online collections and journal packages, whether publisher-specific (e.g. SpringerLink), subject-specific (e.g. EconLit with Full Text) or neither (e.g. EBSCO’s Academic Search Complete). Our review process did not include other types of online resources – purely bibliographic databases, for instance, or collections of newspapers and popular magazines.

Stage 1: title-by-title identification of wanted journals

Three broad approaches to journal selection can be identified in the literature of academic librarianship. The single-criterion approach, in widespread use before 1980, identifies the set of journals that minimizes or maximizes the value of a single criterion (e.g. cost per use or the percentage of journals with high citation impact). Because this method requires the clear specification of just one objective, it encourages librarians to think carefully about the ultimate goals of their collection-building efforts.

The composite score approach, prominent in the 1980s but seldom used thereafter, involves the calculation of a single score that accounts for multiple factors. Each factor is weighted by its importance to the outcome. This method has a strong subjective component since there are no clear standards for the selection and weighting of the component variables.

The multiple-criteria approach, adopted in recent decades, involves the consideration of multiple criteria without the calculation of a composite score or the explicit weighting of variables. The relevant criteria can be considered all at once, or consecutively. For example, we might first select the journals that are most often cited by the academic staff at our own institution, regardless of their overall citation impact; then select those that are rated ‘essential’ by the faculty and that meet minimum standards for cost-effectiveness; then, finally, select low-cost journals that are highly cited (overall) and included in key bibliographic indexes.

Any of the three approaches can be used in Stage 1, depending on the needs of the institution. Apart from cost, the factors most often considered in the literature of the past decade are local use (e.g. online downloads and views), the subjective ratings of faculty or other subject experts, the number of times university staff have cited or published in the journal, inclusion in major bibliographic databases, overall citation impact, availability through the collections of other libraries and use in interlibrary loan.

At Manhattan College, we found it appropriate to select wanted journals by using a single criterion: whether each journal was identified by the faculty of an academic department or program as important to their teaching and research. We first provided each department with a list of the journals in the appropriate subject area(s) of Journal Citation Reports (JCR) and/or the 2010 Excellence in Research for Australia (ERA) project, giving each department a selection target of either 0.3 times the number of journals in the matching JCR subject categories or 1.0 times the number of A*-rated journals plus 0.7 times the number of A-rated journals in ERA. (Evaluations of the subject categories included in both JCR and ERA demonstrated that these two standards are roughly equivalent.) Most department chairs consulted with all or many of their full-time faculty, either (a) asking them to individually choose the most important journals, then compiling a list of those most often selected, or (b) meeting as a group to collaboratively choose the most important titles. The specific selection criteria were left to the faculty in each department. Nearly all chose the journals with the highest citation impact in their fields, along with the more prominent review journals, teaching-oriented journals and practice-oriented journals. The correlations between selection status (wanted or not) and Eigenfactor (which represents the citation impact of the journal as a whole) were lowest for biology, the health professions, engineering, organizational leadership and psychology, where the faculty tended to select more practice-oriented journals.
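The selection-target arithmetic described above can be sketched briefly. The journal counts in the example below are hypothetical, not Manhattan College's actual figures; only the 0.3, 1.0 and 0.7 coefficients come from the text.

```python
def target_from_jcr(n_jcr_journals: int) -> float:
    """Target = 0.3 x the number of journals in the matching JCR subject categories."""
    return 0.3 * n_jcr_journals

def target_from_era(n_a_star: int, n_a: int) -> float:
    """Target = 1.0 x the A*-rated journals + 0.7 x the A-rated journals in ERA."""
    return 1.0 * n_a_star + 0.7 * n_a

# Hypothetical department: 120 journals in its JCR categories,
# or 14 A* and 31 A journals in the matching ERA fields.
print(round(target_from_jcr(120)))     # 36
print(round(target_from_era(14, 31)))  # 36 - illustrating the rough equivalence
```

A department would thus receive the same target (about 36 wanted journals) under either standard, consistent with the observation that the two are roughly equivalent.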

One advantage of selecting individual journals rather than full-text databases is that it allows for the consideration of fundamental principles that are more difficult to take into account when multiple journals are evaluated together. In the Manhattan College case, we developed our Stage 1 procedures with five principles in mind:

  1. Quality of content – not cost, interface design or supplementary features – should be the primary criterion when selecting resources for the library collection. Quality must be defined within the context of particular universities or even particular degree programs. In our case, we felt that the faculty in each department were in the best position to judge, since the 36 departments differ in their goals, student characteristics, teaching and research emphases, presence or absence of graduate programs, academic or applied orientation and role within the College’s general education curriculum.
  2. Some journals are far more valuable than others. The top 10% of the journals in a particular field can easily account for as many citations as the other 90% combined. Even when importance is evaluated based on criteria other than scholarly impact, the same general principle holds true. Journals not designated as wanted journals – in particular, those that would not have been selected individually on a title-by-title basis – should not influence the decision to acquire, or not acquire, a particular online resource.
  3. Scholarly impact, while nearly always important, should not be the sole criterion for selection. Subjective journal ratings are correlated with citation impact only to a moderate extent, and other factors, such as alignment with the curriculum, are especially relevant at universities dedicated primarily to undergraduate education.
  4. A comprehensive review of the journal collection should be zero-based. That is, it should start with a clean slate and attempt to identify the most important journals without any bias toward those initially held by the library. In particular, we should not assume that previous years’ selection decisions reflect the current needs of students and staff. At Manhattan College, we did not honor requests to provide the faculty in each department with lists of ‘their’ journals, and we discouraged them from seeking that information on their own.
  5. Local use statistics are of limited value in the assessment of individual journals. As discussed fully elsewhere, use statistics such as downloads, views and searches are not strongly related to most indicators of journal impact or quality, nor do they necessarily represent educationally meaningful use – the extent to which library resources are integrated into students’ academic work. A reliance on local use statistics may also introduce bias in favor of currently held resources – those for which data are available.

Each of these principles is based on research in librarianship, information science or related fields, but each is sometimes disregarded when immediate concerns such as prices, budgets and renewal deadlines are foremost in the minds of library staff. With our procedure, however, there is no need to consider cost during Stage 1, since cost-effectiveness is central to Stage 2. As discussed in the next section, cost is an attribute of the acquisition opportunity – the subscription or mechanism by which the journal is acquired – rather than an attribute of the journal itself.

Stage 2: cost-effective acquisition of full-text journal resources

Unlike Stage 1, our Stage 2 procedure is necessarily a single-criterion method: it requires the identification or construction of a single variable that represents the goal of the serials review. At Manhattan College, we chose ‘cost per wanted journal’ as our criterion. Our goal, therefore, was to identify the set of acquisition opportunities that minimized cost per wanted journal. Stage 2 can be designed to either minimize or maximize the criterion, and the use of a composite variable based on the values of several other variables is perfectly acceptable. For instance, we could have calculated ‘academic utility’ on a scale of one to ten based on citation impact, subjective assessments and other attributes, then proceeded to Stage 2 with the goal of minimizing cost per unit of academic utility. In practice, however, our calculation of cost per wanted journal was based on the assumption that each wanted journal within a particular database was of equal value.

‘Acquisition opportunities’ are central to Stage 2. Each acquisition opportunity represents the combination of a journal (a particular title) and a full-text journal resource – i.e. a single-journal subscription or a full-text database/collection/package. For instance, The Sociological Review can be acquired as a single-journal subscription, through any of three SAGE full-text databases, or through SocINDEX with Full Text or Sociology Source Ultimate. Consequently, there are six acquisition opportunities associated with the journal, with Manhattan College costs ranging from US$155 to US$1,197. (Each wanted open access journal was counted as a single acquisition opportunity with a cost of US$0.)
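One way to make the journal/opportunity distinction concrete is to model each opportunity as a (journal, resource, cost) record. In the sketch below, the resource names and the assignment of specific prices to specific routes are hypothetical; the text gives only the US$155–US$1,197 range for The Sociological Review, not which route carries which price.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Opportunity:
    journal: str        # the title itself
    resource: str       # the full-text journal resource that provides it
    annual_cost: float  # what the library would actually pay, in US$

# Hypothetical costs for illustration only.
opps = [
    Opportunity('The Sociological Review', 'single-title subscription', 155.0),
    Opportunity('The Sociological Review', 'SocINDEX with Full Text', 650.0),
    Opportunity('The Sociological Review', 'SAGE database (hypothetical)', 1197.0),
    Opportunity('Some Open Access Journal', 'open access', 0.0),
]

# The cheapest route to any single journal is easy to identify ...
cheapest = min((o for o in opps if o.journal == 'The Sociological Review'),
               key=lambda o: o.annual_cost)
print(cheapest.resource, cheapest.annual_cost)
```

... although, as the Stage 2 tasks below make clear, the cheapest route to one journal is not necessarily part of the most cost-effective set of acquisitions overall, since a single database payment may cover many wanted journals at once.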

Stage 2 is essentially a set of five tasks, which are described more fully elsewhere:

  1. Using an overlap analysis tool such as ProQuest 360 Core, compile a list of all the full-text journal databases that are available as potential acquisitions. Establish and apply objective criteria to identify those that are worthy of serious consideration. (Ideally, we would have been able to consider every online resource available to the library. Unfortunately, an analysis of that scope cannot be undertaken in 360 Core, so it was necessary to first exclude the resources unlikely to be appropriate for Manhattan College.) In our case, databases were excluded if they met any of eight criteria – e.g. if they provided access to fewer than three wanted journals or if they were obviously intended for corporate or hospital libraries rather than academic libraries. Altogether, our eight criteria reduced the number of available databases to 284.
  2. Obtain price information for those databases and for single-title subscriptions to each of the wanted journals identified in Stage 1. These prices should reflect what the library would actually pay, accounting for consortial deals and other special arrangements.
  3. Using the overlap analysis tool, determine the number of wanted journals included in each database – the number for which complete full text is provided with an embargo of six months or less. Then, for each database, calculate cost per wanted journal – the total database cost divided by the number of wanted journals. (This assumes that the wanted journals in a particular database are all of equal value. Alternatively, the cost of each database can be divided among the wanted journals on the basis of their size, their citation impact, or some other attribute.) For single-journal subscriptions, cost per wanted journal is simply the annual subscription price, with or without a surcharge to account for the labor cost of processing each subscription individually.
  4. Identify the single most cost-effective database based on its cost per wanted journal. Add that database to the list of resources that will be acquired – but if any single-title subscriptions have a lower cost per wanted journal than the most cost-effective database, select those journals instead.
  5. Remove the journals selected in step 4 – whether included in the selected database or acquired as single-title subscriptions – from the list of wanted journals that must still be acquired. Then redo step 3, recalculating cost per wanted journal based on the remaining wanted journals (those not already selected).

Steps 3, 4 and 5 should be repeated until the goals of the serials review have been met. With a zero-based procedure, there is no need for explicit deselection (cancellation) decisions, since cancellations will be made whenever a previously held journal is not selected through the Stage 2 process.

Our iterative procedure ensures that the most cost-effective resources are selected first. In each successive round, the next most cost-effective resources are selected. Steps 3, 4 and 5 can be repeated until pre-established objectives have been met or until the available funds have been spent. For instance, the goal may be to acquire a fixed percentage of the wanted journals (e.g. the same percentage initially held), to acquire all the wanted journals that can be obtained for less than a specified per-journal cost, or to see how many wanted journals can be acquired without any change in total expenditure.
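The iterative procedure (steps 3–5) can be sketched as a greedy selection loop. This is a simplified model, not the actual Manhattan College implementation: it assumes a flat price for each database, treats every wanted journal within a database as equally valuable (as our calculation did), models single-title subscriptions as one-journal resources, and uses hypothetical names and figures throughout.

```python
def select_resources(resources, wanted, budget=None):
    """Greedy selection minimizing cost per wanted journal.

    resources: dict of name -> (annual_cost, set of wanted journals covered)
    wanted:    set of wanted journals still to be acquired
    budget:    optional spending cap (one possible stopping rule)
    """
    remaining = set(wanted)
    selected, spent = [], 0.0
    while remaining:
        # Step 3: recompute cost per *remaining* wanted journal for each resource.
        best, best_cpj = None, float('inf')
        for name, (cost, titles) in resources.items():
            if name in selected:
                continue
            covered = titles & remaining
            if covered and cost / len(covered) < best_cpj:
                best, best_cpj = name, cost / len(covered)
        if best is None:
            break  # no available resource covers any remaining wanted journal
        cost, titles = resources[best]
        if budget is not None and spent + cost > budget:
            break  # stopping rule: available funds exhausted
        # Step 4: acquire the most cost-effective resource.
        selected.append(best)
        spent += cost
        # Step 5: remove the journals it covers, then repeat.
        remaining -= titles
    return selected, spent

# Hypothetical resources: J1-J6 are wanted journals.
resources = {
    'DatabaseA': (900.0, {'J1', 'J2', 'J3', 'J4'}),  # 225 per wanted journal
    'DatabaseB': (500.0, {'J4', 'J5'}),              # 250 per wanted journal
    'J6 (single title)': (150.0, {'J6'}),            # 150 per wanted journal
}
picks, total = select_resources(resources, {'J1', 'J2', 'J3', 'J4', 'J5', 'J6'})
print(picks, total)  # the single title first, then DatabaseA, then DatabaseB
```

Note how the recalculation in step 3 matters: once DatabaseA is acquired, DatabaseB's cost per wanted journal rises from 250 to 500, because J4 no longer counts toward its coverage.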

The Stage 2 procedure is largely mechanical, and it is not grounded in any particular collection development principles. However, it does address a major criticism of full-text databases – that they include many journals that do not meet the library’s usual selection standards. Because cost per wanted journal is based solely on the number of wanted journals in each database, our procedure ensures that ‘unwanted’ journals do not enter into the evaluation of cost-effectiveness. No credit is given for titles that do not support the needs of students and faculty, and the number of unwanted journals in a database has no bearing on the results of the procedure.

Our method does not account for database content other than full-text journals, or for characteristics such as interface design or quality of bibliographic records. It is therefore not appropriate for the evaluation of resources that are primarily bibliographic, or for specialized databases that provide access to images, streaming media, news articles, tests, reference materials, e-books or statistical data.

Conclusion

As noted in the Introduction, our two-stage review process could have achieved dramatic cost savings for Manhattan College if that had been our goal. That is, we could have ended the iterative process after 21 rounds, acquiring 1,544 of the wanted journals (i.e. more than the 1,404 initially held) and paying just 37% of our initial cost. Because expenditure reduction was not one of our goals, however, we continued for 43 rounds, achieving a 3% reduction in total cost, a 50% increase in the percentage of wanted journals held, a 35% reduction in cost per wanted journal and a substantial reduction in inequality of holdings across academic departments. Although we were concerned that the strong emphasis on wanted journals might substantially reduce the overall number of titles held, our total serials count declined by just 5%.

By decoupling the selection of wanted journals from the identification of cost-effective acquisition mechanisms, our method allows for the evaluation of each journal solely on the basis of quality. Quality may incorporate any number of locally important attributes – reputation, scholarly impact, demonstrated instructional value or importance to particular groups within the university, for instance – but factors unrelated to quality are specifically excluded from the Stage 1 review process. In particular, our method minimizes any bias associated with publisher or publisher type. All publishers compete on an equal footing. This is appropriate, since editors – not publishers – ultimately determine journal quality through the processes of manuscript solicitation, review, revision and acceptance or rejection.

Moreover, these results reveal that substantial savings can be realized when each journal is acquired in the most cost-effective way (Stage 2). It is misleading to refer to the price of any particular journal, since price is an attribute of the acquisition opportunity – not of the journal itself. For instance, the American Political Science Review can be acquired as a single-journal subscription or through any of 12 databases, at annual prices ranging from US$105 to US$1,774. Our review method capitalizes on the distinction between journals and acquisition opportunities, demonstrating that the acquisition of journals through full-text databases is in no way inconsistent with title-by-title selection. Even when each wanted journal is selected individually, package deals usually provide the most cost-effective means of acquisition. Of the 1,232 wanted journals available to Manhattan College through both full-text databases and single-title subscriptions, 88% are less expensive when acquired through a database.