Background and context

Taylor & Francis Group is an international knowledge service provider with a portfolio of journals and books that span the disciplinary spectrum from anthropology to zoology. We publish journals on behalf of hundreds of learned societies and professional member organizations, work with thousands of expert academic editors and board members and support researchers to share the outputs of their work.

As a publisher, our role in scholarly communications includes both responding to the evolving priorities of our partners and customers and helping to take a lead in areas such as publishing standards and ethics. This includes discussion around the use, and misuse, of metrics in research and researcher assessment, including criticism that, rather than supporting the assessment process, metrics have replaced qualitative review of research outcomes. A recurring concern raised by numerous commentators, from institutions and think-tanks to researchers themselves, has been the misapplication of metrics. The most commonly levelled charge is against the impact factor, with much discussion of how this journal-level metric, originally used to inform library purchasing decisions, is incorrectly used to evaluate researcher performance and individual research outputs.

There has been concerted action to address this in recent years, with many institutions, funders and publishers cautioning against misuse of the impact factor to assess researchers or individual research outputs. Calls have also been made for different approaches, including more qualitative processes to underpin researcher assessment using tools such as the narrative CV. The Global Research Council’s 2021 report on responsible research assessment, for example, advocates rewarding researchers for their wider contributions to the community, such as peer reviewing or public engagement, and not just for their publication record.

Contributions to this debate have come from a range of sources and experts, for example the Leiden Manifesto for Research Metrics and the Hong Kong Principles. Arguably, the best known of these initiatives is the Declaration on Research Assessment (DORA). DORA came into being as an outcome of the 2012 American Society for Cell Biology (ASCB) meeting. At its heart, DORA aims to address this misapplication of the impact factor with its general recommendation, ‘Do not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions.’ The Declaration outlines a number of other best practices for funding agencies, institutions, publishers, metrics providers and researchers. Supporters are invited to sign up to DORA on the understanding that this means they will follow the recommended practices outlined within the Declaration.

Taylor & Francis signed up to DORA in March 2021 after almost two years of discussion, consultation and development. We could have signed sooner but took a deliberate decision to wait until we had met two key milestones. Firstly, we wanted to consult with our learned society partners, academic editors and colleagues across the globe and consider their feedback as part of a consensus-driven approach. Secondly, we needed to be sure that we could adhere to the practices outlined in the Declaration and wanted to link our signature with concrete developments to ensure that we were acting on the intentions expressed in DORA.

There has been much discussion and advocacy around DORA. Not as much has been written about implementation and the implications of signing up to the Declaration, but there is a growing body of case studies and insights. UiT The Arctic University of Norway has provided some reflections on their experience, noting that implementing DORA is easier for an institution than implementing open access policies, but that ensuring full compliance in practice is challenging. The DORA site also hosts some insightful case studies, with more being added as experience is gained. We write from the perspective of a publisher rather than an academic institution, for which implementation has much broader practical implications. Nonetheless, we hope this case study provides some guidance that can be used by organizations considering signing up to and implementing DORA.

Discussion

The approach taken by the team at Taylor & Francis was to treat DORA signature and implementation as a project, comprising various activities and stakeholders and based on a simple project plan. We loosely followed the software development life cycle, applying it not just to technical developments but to the whole project. The process typically comprises sequential steps covering the following areas: discovery (or ideation), development, launch and review. The key elements of each stage are outlined below. Supporting this activity internally was a large cast of colleagues drawn from our Editorial, Marketing, Research and Analytics, and Technology and Product teams. The composition of the working group changed as we moved from one stage of the project to the next.

Discovery

Analysis and action plan

Although we had DORA signature in mind from the outset, we devoted some time to investigating other research assessment reform initiatives, comparing and evaluating them against a loose set of criteria (including ease of implementation, research community buy-in and community awareness). This research affirmed our original plan of working towards DORA signature because DORA 1) contains a set of practices that could be used as a checklist, 2) has a clear process for signing and 3) already has a critical mass of supporters from across the research community.

As well as reviewing the best practices that were applicable to publishers, we considered the broader set of practices. Our analysis sought to clarify those areas where we were already working to the stated outcome and those where we would need to develop or change policy. We were already working to best practice in many areas, including the provision of article-level metrics, such as usage and Altmetric attention scores, and opening up citations through our support of the Initiative for Open Citations (I4OC). We had the most work to do on recommendation 6, which requires publishers either to cease promoting the impact factor or to present it ‘in the context of a variety of journal-based metrics … that provide a richer view of journal performance’. To meet this requirement, we drew up a high-level action plan setting out what change or development was required, possible issues, who needed to buy in to or sign off on this work, and approximately what level of effort and investment implementation would require.

Consultation and socialization

We used the outcomes of our analysis to carry out an internal consultation, sourcing views from colleagues across the business and from different global regions. From this, we obtained a range of views and feedback. Common themes included:

  • while change is needed to how the impact factor is used, it still has value in certain contexts
  • DORA signature should be supported with clear actions to avoid ‘virtue signalling’
  • we needed to acknowledge that our editors and learned society partners aren’t obligated to take the same stance as us.

We expected to hear more concerns from colleagues in regions where the impact factor is still used in research assessment. In anticipation of this, we noted that our intention was to downweight, but not remove, the impact factor and to present it in context alongside other metrics. This met with broad agreement.

On a parallel track, we engaged in invaluable discussions with the DORA team and sought views from our external partners (academic editors and boards) and the society and member organizations to whom we provide publishing services. We received general support for the principles of DORA, although there was some hesitation when we outlined the likely consequences of signature. This mainly centred on giving less prominence to the impact factor and fears that this might negatively influence authors’ submission decisions. Additional concerns included whether such a development would add to the complexity of the editor’s role or their workload. Understanding these hesitations helped inform our implementation process, particularly the guidance we created for our partners and authors.

Scoping and requirement gathering

Based on the feedback above, the working group felt confident that we could sign the Declaration as Taylor & Francis and could provide information to partners to allow them to consider whether they wished to support the Declaration as well. We wanted to make sure that our signature was supported by concrete action and to help colleagues and partners understand the practical implications of signature. From this outcome came the decision to review how we presented journal-level information on our platform, Taylor & Francis Online. Previously, the journal impact factor had been prominently displayed on website pages. The feedback we gathered made clear the importance of reducing the emphasis placed on this metric on our journal homepages and that, where present, it needed to be supported by a broader range of metrics to contextualize overall journal performance.

From the outset, we acknowledged that addressing issues around the misuse of metrics by providing more journal-level metrics might not solve the issue. With this in mind, we noted the importance of developing external communications and internal training materials which emphasized that research outputs should always be assessed on their own merits. We debated when would be an appropriate time to sign up to the Declaration and agreed to wait until we had implemented the changes on our platform to de-emphasize the impact factor and introduce the additional journal-level data.

We devoted a good deal of time to considering what additional metrics to include and decided upon these based on inputs from researchers, including the questions most asked of our customer help desks, discussions with the Society Publishers’ Coalition and a review of what metrics were commonly reported on other publisher sites. At the time, a pilot was running on a group of our Medical and Health Sciences journals around the display of more metrics of journal performance, such as turnaround times; this was also highly influential. More time than expected was devoted to debating whether we should display mean or median data for speed metrics. We decided to display median data, which lessens the impact of outliers and means that times are more representative of a typical manuscript journey, but this topic is still under debate. One challenge we encountered was that, particularly in terms of researcher feedback, the impact factor was considered as a ‘must-have’ metric. The group agreed that in order to encourage cultural change and a shift away from reliance on impact factor, we needed to do our part to present it in context alongside other metrics that provided more detail on journal performance.
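To make the mean-versus-median trade-off concrete, the short sketch below uses invented turnaround times (illustrative figures only, not drawn from our data) to show how a single outlier pulls the mean away from a typical manuscript journey:

```python
# Illustrative only: hypothetical submission-to-first-decision times (days)
# for one journal, showing why the median better reflects a typical manuscript.
from statistics import mean, median

turnaround_days = [21, 25, 28, 30, 33, 35, 38, 41, 44, 180]  # 180 is a single outlier

print(f"Mean:   {mean(turnaround_days):.1f} days")    # 47.5 days, inflated by the outlier
print(f"Median: {median(turnaround_days):.1f} days")  # 34.0 days, closer to a typical submission
```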

Development

Once the metrics scope was agreed, we faced two main challenges: how to source the metrics, and where and how to display them on the platform so that users could easily find and interpret them.

With regard to the latter, we conducted user interviews with researchers who had experience of publishing journal articles. Each user was asked to walk through several journeys, giving feedback throughout, and to compare different placements for the new journal metrics on the journal landing page. Most of the users interviewed expected to find the link to the journal metrics under the ‘About this journal’ menu. All gave positive feedback about having a dedicated page for the metrics, and they particularly liked the use of icons.

The outcome was the creation of the new ‘Journal metrics’ page, linked to from the ‘About this journal’ menu and presenting the metrics in three main groups: Usage, Citation metrics and Speed/acceptance (see Figure 1).

Figure 1: Journal metrics page

The sourcing of the data for the metrics page was more complex, because the information did not reside in a single source that could easily populate the web page. After some technical discussions, it was agreed that the data would be gathered and uploaded via an API (application programming interface), feeding into the platform for display through a series of custom properties.
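As a purely illustrative sketch of this approach, the snippet below shows how aggregated journal-level metrics might be pushed to a platform endpoint as custom properties, grouped as they appear on the page; the endpoint URL, authentication, field names and values are all hypothetical and do not describe the actual Taylor & Francis Online API:

```python
# Hypothetical sketch only: the endpoint, token and field names are invented
# to illustrate uploading journal-level metrics as custom properties via an API.
import json
import urllib.request

journal_metrics = {
    "journal_id": "EXAMPLE-001",  # placeholder identifier
    "usage": {"annual_downloads": 125_000},
    "citation_metrics": {
        "impact_factor": 2.5,
        "citescore": 3.1,
        "citescore_best_quartile": "Q2",
    },
    "speed_acceptance": {
        "median_days_to_first_decision": 34,
        "acceptance_rate_percent": 28,
    },
}

request = urllib.request.Request(
    "https://platform.example.org/api/journal-properties",  # hypothetical endpoint
    data=json.dumps(journal_metrics).encode("utf-8"),
    headers={"Content-Type": "application/json", "Authorization": "Bearer <token>"},
    method="PUT",
)
with urllib.request.urlopen(request) as response:  # would require a real endpoint and credentials
    print(response.status)
```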

Launch

The project group had been interacting with colleagues and external partners for a prolonged period by this point, so we felt that there was a high level of awareness around both the planned enhancement to the platform and our plans to sign up to DORA. Once we were confident that we were technically ready to launch, we focused much of our energy on briefing colleagues about the forthcoming release, providing them with information to share with external partners and support in the form of guidance and resources. These resources included information on our Author Services site for researchers in particular but also signposted best practice guidance for colleagues to refer to – most importantly, ensuring that we didn’t promote the impact factor in isolation. Guidance also included an internal standard operating procedure for those journals where teams wished to opt out of the display of turnaround and acceptance rates.

A working group of Marketing and Communications colleagues representing a range of regions and subject areas mapped where current practices needed to be changed to ensure we aligned with DORA recommendations. For example, our regular author surveys consistently demonstrated the importance of the impact factor in researchers’ submission decisions and so it was previously common for this information to be featured prominently in promotions. The result of this group’s work was a global briefing and an internal guide on best practice use of journal metrics. The guide sets out a range of protocols for marketing colleagues, including ensuring metrics are always used in context, avoiding the use of single journal metrics in isolation and highlighting qualitative indicators of quality and impact alongside quantitative measures.

Once we had gone live with the platform update and made a few other tweaks to ensure that we were meeting the intention of DORA’s practices, we announced our signature of the Declaration in March 2021. We attracted some criticism for our perceived slowness in signing up compared to other publishers, for applying the same journal-level metrics across all disciplines and in some quarters for our choice of journal-level metrics. However, we also received positive feedback with the platform development signposted as an example of us making change to support the Declaration in deed as well as word.

Post launch – lessons learned and future plans

As noted above, we were pleased by the positive response from stakeholders to our support of DORA, and we have received very few queries around, for example, impact factor promotion.

We have already updated the metrics display to include an additional metric (CiteScore Best Quartile), based on feedback from users of the platform and our academic editors. In the future we hope to develop our data sources so that management and update of metrics is automated. We would also be keen to work with other stakeholders to develop a common standard to aid comparison between journals, regardless of publisher.

We still have work to do, for example in helping prospective authors to understand and interpret the metrics we present, considering how to display some metrics in context and improving our data flows. Broader cultural change will take longer to have an effect. Although there is a trend towards de-emphasizing the impact factor, there are still parts of the world that reward researchers for publishing in journals with an impact factor. We hope that by signing up to the Declaration, and with the changes that we have made, we are playing our part in effecting change and helping to encourage our networks to review and consider their approaches to research assessment.

Conclusion

We have outlined the process that a large academic publisher underwent to sign up to and implement the Declaration on Research Assessment, including activity to de-emphasize the impact factor on its platform. Although there was broad support for the spirit of DORA and its call to reform research assessment practices, we had to overcome some friction when implementing the changes needed to support our signature, particularly those that required a change in behaviour. We recommend that organizations considering signing DORA approach it with an implementation mindset, considering the practical implications of signing the Declaration (including any targeted investment in technological infrastructure), how these might affect key stakeholders and how to win support for proposed changes.