Introduction

The Altmetrics Manifesto states that ‘No one can read everything. We rely on filters to make sense of the scholarly literature, but the narrow, traditional filters are being swamped … the growth of new, online scholarly tools allows us to make new filters; these altmetrics reflect the broad, rapid impact of scholarship in this burgeoning ecosystem. We call for more tools and research based on altmetrics’.

Traditional filters for scholarly literature have focused on peer review and citation-based measures. But these do not reflect the range of formats and channels by which the outputs of scholarly research are now disseminated and absorbed. The journal article, long the standard format for the publication of research outputs, is now supplemented by data sets, blogging and ‘nanopublication’. The channels used by authors to reach their readership have also expanded. Scholars have moved their publications onto the web, and the ongoing conversation around the outputs of research increasingly takes place through social media. Beyond the research community itself, scholarly information has an impact on other professionals, as well as on the general public. Traditional measures do not reflect these wider impacts, which are increasingly important as the public and private funders of research seek to demonstrate the contribution that the research they fund makes to society as a whole.

How can we monitor these broader impacts? The challenge, of course, is to determine what to measure and how to measure it. There are many possible activities that fall within the scope of altmetrics, including links, tweets, bookmarks, and Facebook and blog posts. What is the significance of these activities? Can we talk of a ‘tweeted half-life’? Do 20 blog posts equal one Mendeley download? Already there are a number of initiatives, such as ImpactStory and Macmillan's Altmetric, which provide researchers with a consolidated overview of the impact of their work, beyond just citations.

“No one can read everything.”

The raison d'être of any scholarly author is to share their findings and ideas, and by doing so to have an impact on colleagues, and on their field of research. In the traditional world of journal publishing, citation-based measures, notably the Impact Factor, have provided simple, global indicators to authors who, for the last 30 years, have largely adopted a simple strategy: to publish their work in high Impact Factor journals. In a more multimetric world, the development of strategies to maximize measured impact is not so straightforward. Another recent initiative, Kudos, aims to provide researchers with a service that will help them increase the impact of their published research in the web-based publishing world by maximizing readership for it.

Altmetrics have a number of advantages over traditional citation impact measures. They cover all fields of scholarship and all types of publication. They take social media into account. They provide a more immediate measure of impact than citations. They go beyond mere counting and emphasize semantic content such as usernames and time stamps. There are, however, valid questions about what the altmetrics numbers mean: are they strongly linked to research quality? Do altmetrics measure impact or merely ‘buzz’? There is also a perception that they will be easier to manipulate and less transparent than citation data. Nor are there, as yet, any widely accepted standards or benchmarks for altmetrics.

“Do altmetrics measure impact or merely ‘buzz’?”

Standards for altmetrics

The importance of developing standards for altmetrics has been recognized by the National Information Standards Organization (NISO), which has set up a new project, the aim of which is to standardize the collection and use of alternative metrics measuring research impact. The project will identify areas where altmetrics standards or recommended practices are needed and will follow this with new standards and/or recommended practices. The project is funded by the Alfred P Sloan Foundation.

Todd Carpenter, Executive Director of NISO, has stated that “for altmetrics to move out of its current pilot and proof-of-concept phase, the community must begin coalescing around a suite of commonly understood definitions, calculations and data sharing practices … We must agree on what gets measured, what the criteria are for assessing the quality of the measures, at what granularity these metrics are compiled and analyzed, how long a period the altmetrics should cover, the role of social media in altmetrics, the technical infrastructure necessary to exchange this data and which new altmetrics will prove most valuable. The creation of altmetrics standards and best practices will facilitate the community trust in altmetrics, which will be a requirement for any broad-based acceptance and will ensure that these altmetrics can be accurately compared and exchanged across publishers and platforms.”

Phase 1 of this project, which will be completed in 2014, will identify the specific areas where NISO should develop standards or recommended practices. These areas will then be progressed by a working group convened in Phase 2. It is envisaged that the project will take two years to complete.

COUNTER, new usage-based standards and altmetrics

The mission of COUNTER (Counting Online Usage of Networked Electronic Resources) is to set and monitor global standards for the measurement of online usage of content; it does so for journals, books and multimedia content. Usage is an important measure of the impact and value of publications, and as such has a role in altmetrics. Usage can be reported at the individual item and individual researcher level and aggregated to the journal or institution level. Usage is more ‘immediate’ than citations and also potentially covers all categories of online publication. Furthermore, COUNTER-based usage measures are based on statistics that are independently audited and generally trusted. Two COUNTER initiatives, now close to implementation, are particularly important in this context. They are Publisher and Institutional Repository Usage Statistics (PIRUS) and the Usage Factor.

“The creation of altmetrics standards and best practices will facilitate the community trust in altmetrics …”

As well as being diverse, altmetrics are flexible. They enable measurement of impact in a range of ways and at a variety of levels beyond the journal. Altmetrics provide impact measures for the individual article, the individual researcher and for the research institution. The new COUNTER-based metrics fit well into this scheme.

PIRUS

The PIRUS Code of Practice has been established to provide a COUNTER-compliant standard for the recording, consolidation and reporting, at the individual article level, of the usage of journal articles hosted by publishers, aggregators, institutional repositories and subject repositories. The definitive Release 1 of the PIRUS Code of Practice has been approved by the COUNTER Executive Committee, following a period of extensive consultation, and will be published on the COUNTER website during the final quarter of 2013.

“… credible, consistent and compatible …”

PIRUS builds on the COUNTER Code of Practice and COUNTER will be responsible for its development, ongoing management and implementation. To have their usage statistics and reports approved, PIRUS-compliant vendors and services will have to provide usage statistics that conform to this Code of Practice. Vendors that are already COUNTER-compliant should find it relatively straightforward to conform to the PIRUS standard. In addition to what is already provided for COUNTER, the key additional metadata requirements for PIRUS compliance will be:

  • the article-level digital object identifier (DOI), which allows usage to be recorded, reported and consolidated for each journal article
  • the ORCID identifier, which allows articles to be unambiguously allocated to a particular author (a minimal illustrative record is sketched below).
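
To make the requirement concrete, the sketch below shows the kind of minimal, article-level usage record that these two identifiers make possible. It is illustrative only: the field names and structure are assumptions made for this article, not the normative PIRUS metadata schema.

```python
# Illustrative sketch only: field names are assumptions, not the normative PIRUS schema.
from dataclasses import dataclass

@dataclass
class ArticleUsageRecord:
    doi: str                  # article-level DOI: the key for consolidating usage across platforms
    orcid: str                # ORCID identifier, linking the article unambiguously to an author
    platform: str             # hosting platform (publisher site, aggregator or repository)
    period: str               # reporting month, e.g. "2013-09"
    full_text_requests: int   # successful full-text requests recorded in the period

# Hypothetical example values
example = ArticleUsageRecord(
    doi="10.1000/example.2013.001",   # hypothetical DOI
    orcid="0000-0002-1825-0097",      # the well-known example ORCID iD
    platform="Publisher platform A",
    period="2013-09",
    full_text_requests=42,
)
```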

The PIRUS Code of Practice provides a standard that enables any organization hosting journal articles to report in a credible, consistent and compatible way the usage of these articles to authors, their institutions and their funding organizations. It also enables vendors to consolidate usage of articles on different platforms into a global usage total.

While Release 1 focuses on journal articles, its principles may be applied to other categories of individual content items that are well defined and have sufficiently robust metadata associated with them.

The PIRUS Code of Practice provides the specifications and tools that will allow COUNTER-compliant publishers, repositories and other organizations to record and report usage statistics at the individual article level that are credible, compatible and consistent. COUNTER-compliant publishers may build on the existing COUNTER tools to do so, while an alternative approach is provided for non-COUNTER-compliant repositories, tailored to their systems and capabilities. This Code of Practice contains the following features:

  • a list of definitions and other terms that are relevant to recording and reporting usage of individual items
  • a methodology for the recording and reporting of usage at the individual article level, including specifications for the metadata to be recorded, the content types, and the versions whose usage may be counted.
  • specifications for the PIRUS Article Reports
  • data processing rules to ensure that the usage data reported are credible, consistent and compatible (illustrated in the sketch after this list)
  • specifications for the independent auditing of the PIRUS reports
  • a description of the role of:
    1. a Central Clearing House (CCH) in the calculation and consolidation of PIRUS usage data for articles
    2. other Clearing Houses in relation to the CCH.
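
As an illustration of how rules of this kind operate in practice, the sketch below applies a COUNTER-style ‘double-click’ filter (repeat requests for the same article by the same user within a short window are counted once) and then aggregates the surviving requests into monthly, article-level counts. The 30-second window, record layout and function names are assumptions made for this sketch, not the normative PIRUS processing rules.

```python
# A minimal sketch of COUNTER-style processing, not the normative PIRUS rules:
# repeat requests for the same article by the same user within a short window
# are counted once, and surviving requests are rolled up into monthly counts.
from collections import defaultdict
from datetime import datetime

def monthly_counts(raw_events, window_seconds=30):
    """raw_events: iterable of (doi, user_id, timestamp) tuples."""
    last_counted = {}            # (doi, user_id) -> time of last counted request
    counts = defaultdict(int)    # (doi, "YYYY-MM") -> filtered request count
    for doi, user, ts in sorted(raw_events, key=lambda e: e[2]):
        prev = last_counted.get((doi, user))
        if prev is not None and (ts - prev).total_seconds() < window_seconds:
            continue             # treated as a double-click; not counted again
        last_counted[(doi, user)] = ts
        counts[(doi, ts.strftime("%Y-%m"))] += 1
    return dict(counts)

events = [
    ("10.1000/xyz123", "user-1", datetime(2013, 9, 2, 10, 0, 0)),
    ("10.1000/xyz123", "user-1", datetime(2013, 9, 2, 10, 0, 5)),  # filtered out
    ("10.1000/xyz123", "user-2", datetime(2013, 9, 3, 11, 30, 0)),
]
print(monthly_counts(events))  # {('10.1000/xyz123', '2013-09'): 2}
```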

Unlike the standard COUNTER usage reports, which vendors must update monthly for all products covered, the PIRUS usage reports do not have to be provided monthly for every article they cover (but should be broken down by month when reported). Rather, as a minimum requirement, vendors must have the capability to produce the PIRUS reports, on an annual basis, for all the journal articles they host.

While COUNTER-compliant publishers will be eligible to apply for PIRUS-compliant status as soon as the definitive version of Release 1 is published, allowing them to report usage of their own articles on their own platforms, an infrastructure will be required to consolidate individual article usage data from different platforms, aggregators and repositories into global article usage statistics that can then be shared with authors, or used as the basis for calculating Usage Factors. The first element of that infrastructure is already in place.

“IRUS now has over 40 participating repositories and looks like a very promising model …”

One of the greatest challenges in creating global article usage statistics is capturing usage that takes place in the growing number of institutional repositories worldwide. Unlike publishers, repositories have had no COUNTER standard for the recording and reporting of usage; nor do most of them have the resources to comply with such a standard themselves. Yet, a growing proportion of online usage takes place in these repositories and any measure that pretends to a global view must take them into account. Institutional Repository Usage Statistics-UK (IRUS-UK) has been operational since 2012 and provides a national facility that collects, consolidates and processes raw usage data from participating UK institutional repositories to create PIRUS-compliant individual item usage data.

IRUS now has over 40 participating repositories and looks like a very promising model, not only for the consolidation of usage statistics by repositories outside the UK, but also for the PIRUS Central Clearing House itself.
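
The consolidation step itself is conceptually simple, even if the surrounding infrastructure is not: per-platform counts for the same DOI are summed into a single global figure for each article and month. The sketch below illustrates that idea; the data structures and names are assumptions made for this article, not the IRUS-UK or Central Clearing House interfaces.

```python
# Illustrative consolidation of per-platform article usage into global totals.
# Structures and names are assumptions, not the IRUS-UK or CCH interfaces.
from collections import defaultdict

def consolidate(per_source_counts):
    """per_source_counts: iterable of (source, doi, month, count) tuples."""
    totals = defaultdict(int)
    for source, doi, month, count in per_source_counts:
        totals[(doi, month)] += count   # same DOI and month from different sources are summed
    return dict(totals)

reports = [
    ("Publisher platform",         "10.1000/xyz123", "2013-09", 120),
    ("Institutional repository A", "10.1000/xyz123", "2013-09", 35),
    ("Subject repository B",       "10.1000/xyz123", "2013-09", 14),
]
print(consolidate(reports))  # {('10.1000/xyz123', '2013-09'): 169}
```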

Usage Factor

A precondition for the successful implementation of the Usage Factor is a format and process for the efficient, automated collection of usage data, as well as the relevant metadata, at the individual article level. This process has been developed, tested and implemented for PIRUS. In principle, therefore, PIRUS-compliant publishers will be able to calculate and report Usage Factors for their own journals.

The final stage of the Usage Factor project is nearing completion. It has focused on the following objectives:

  • to calculate Journal Usage Factors (JUFs) for a wider range of subject areas, and at a more granular level, than had been done in earlier stages of the project
  • to test the stability of JUFs over time
  • to compare 12- and 24-month JUFs within subject areas
  • to test whether a base threshold should be set for JUFs, below which they should not be reported
  • to select an appropriate subject classification scheme that covers all scholarly disciplines and has no geographical bias.

Usage data for 224 journals in 27 different subject fields was collected from 11 publishers and two aggregators, and the 12- and 24-month Journal Usage Factors were calculated for them using the following formulae (with the years 2009 and 2010 as examples):
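
(The formulae themselves are not reproduced in this text. The ratios below are a hedged reconstruction of their general form, with 2009 as the example publication year; the draft Code of Practice remains authoritative and may define the Usage Factor as a median of item-level usage rather than the simple mean shown here.)

\[
\mathrm{JUF}_{12\text{-month}}(2009) \approx \frac{\text{total usage during 2009 of articles published in 2009}}{\text{number of articles published in 2009}}
\]

\[
\mathrm{JUF}_{24\text{-month}}(2009\text{--}2010) \approx \frac{\text{total usage during 2009 and 2010 of articles published in 2009}}{\text{number of articles published in 2009}}
\]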

Some useful observations can be made about the JUFs calculated:

  1. The group of journals included in this study is a very small subset of the journal universe and the selection of journals within a given subject field is small and not necessarily representative of the field as a whole. This should be borne in mind when looking at the JUF numbers.
  2. Within each subject area the spread of JUFs for both 12- and 24-month periods appears to be sufficiently large to allow journals to be differentiated.
  3. In the great majority of cases, the 12- and 24-month JUF rankings place the top five journals in each subject area in much the same order. The exceptions to this are some fields in the social sciences, where the rankings change more markedly. As indicated in the report, the time decay of usage in the social sciences is slower than in the physical and life sciences, so a smaller proportion of lifetime usage is captured in the first 12 months of an article's life.
  4. Journal rankings based on median JUF values appear to be reasonably stable from year to year.

JUFs fluctuate from year to year, but the significant spread of JUFs for journals within a given field means that these fluctuations do not appear to affect the relative positions of journals within the rankings, which are rather stable. Nonetheless, these fluctuations are a source of concern and a further study is under way to compare them with fluctuations in the Impact Factor over the same period.

There is no evidence to justify excluding low JUF journals, or journals with low numbers of articles, from JUF lists. The JUFs for these journals are no less stable than those for high JUF journals.

Of the subject schemes reviewed, Ringgold Subjects best meets the needs of the Usage Factor. It covers all fields of scholarship to an appropriate level of granularity. It is international, and is organized and maintained by a respected independent organization.

“… as Mae West said, ‘It is better to be looked over than overlooked.’”

Next steps

The draft COUNTER Code of Practice for Usage Factors was published for comment in April 2012. Based on the feedback received and on the results of the final stage of the project, this draft will be refined and published in final form for implementation, following approval by the COUNTER Executive Committee.

Usage-based metrics have an important role to play in the altmetrics mix. They provide a unique perspective on the impact of research. They are available for all categories of online publication. They are based on well-established standards. But usage statistics do not tell us everything about usage, let alone impact, and they should be supplemented by other metrics, as well as surveys, case studies, etc. Usage statistics are, nonetheless, an important first step. They tell us that a content item has been looked over. And as Mae West said, “It is better to be looked over than overlooked.”