Introduction

Research evaluation is increasingly a central topic of debates and reports in and around academia. The process of evaluating researchers mainly on the basis of publication and citation metrics has come under fierce scrutiny, as it is believed to be one of the main drivers of the documented adverse effects of 'publish or perish' pressure on academic careers. These adverse effects include:

  • over-emphasis on the perceived novelty of research results and under-emphasis on the robustness and rigour of the methods used, both in publications and in discussions, which favours methodological pathways at higher risk of false-positive outcomes
  • individual behaviours that seek to maximize advantage in terms of personal evaluation metrics, ranging from distorting authorship credits and slicing research outcomes rather than focusing on the bigger picture of relevance for the research field, to even more ethically compromised practices such as data manipulation
  • driving funds into publishing models that benefit an oligopoly of publishers, even in the transition towards open access.

Reforming research evaluation towards a system focused on the diverse aspects of scientists' work, rather than on publishing alone, would have many benefits. It could help relieve the pressure on researchers' mental health and encourage better scientific practices that put the emphasis on collaboration, such as data sharing and other open science approaches.

Where do early career researchers (ECRs) stand on this issue? Many of us are strong supporters of open science, but there is always a gap between awareness and action. Whether we like it or not, we all know that metrics will not disappear overnight; with few exceptions, they will still be on the table when we are (and will be) evaluated. The game may be rigged, but we are all forced to play.

Here, we share reflections from the perspectives of a PhD candidate; a young academic and PhD supervisor; and an ECR who recently switched from academic research to science advocacy.

The PhD candidate: ‘locked-in’ to the current system if I want to progress

A striking effect of evaluation systems based on publication and citation metrics is on the expectations that evaluation panels place on early career researchers applying for postdoctoral positions or grants. We are still evaluated on the basis of our publications – not so much on the intrinsic quality of our work as on the Journal Citation Reports (JCR) quartile of the journal in which we publish. Choosing to send articles to innovative field-related open access journals or platforms with interesting features, such as open peer review and reasonable article processing charges (APCs), is something I would like to do more to actively support a change of culture. However, my strong perception is that these choices would carry a substantial cost in future contexts where I am being evaluated; for example, when applying for postdoctoral funding. The system forces us all to play the game to a certain extent – it favours the conventional choice of publishing in the most prestigious and cited journals in our subject area, which tend either to sit behind a paywall or to offer open access publication only on payment of relatively high APCs, or via expensive transformative agreements. In a way, early career researchers are 'locked in' when it comes to selecting venues for publication, as we are evaluated not so much on the content of our work (and the effectiveness of the associated peer-review process) as on where we publish it. Embracing innovative open access journals and platforms, and the many advantages they offer in terms of quality control and transparency of scientific debate (pre-registration, open peer review, post-publication reviews), comes at a cost: that of facing a possible backlash from peer evaluation when applying for funding and new positions.

The young academic and PhD supervisor: no choice but to perpetuate the current system

I have been supportive of open access since early in my PhD. Although I have tried to publish openly whenever possible, I have consciously – and with the advice of my mentors – chosen the journals perceived to be most prestigious, in order to further my career. This strategy has been successful: thanks to ample third-party funding that can be used for APCs, the majority of my published output is openly available, and at the same time I have been able to win prestigious grants and find a tenure-track position at a fairly young age. However, although in this sense I have 'won the academic lottery' and my position will soon be secure, a new challenge has emerged – I am no longer responsible just for my own career. I now supervise PhD trainees and postdoctoral researchers, whose career prospects I feel responsible for safeguarding. Should I submit the key findings of a PhD project to a less prestigious but open access journal, or should I go for the top-tier 'closed' options to give my trainees the best chances of succeeding, as I have succeeded? For me, the choice is unfortunate but clear: I need to make sure that the people I supervise have the best chances of career success. This dilemma starkly illustrates the systemic nature of the problem and highlights the vulnerability of early career researchers – no researcher should have to martyr themselves to advance openness, however valuable it is for science. What is urgently needed instead is a systematic overhaul of the entire reward and evaluation system, so that research is valued on its own merits rather than on where it is published.

The researcher turned science advocate: change the system from within

Anyone who looks up my publication record may think, 'This is what is wrong with our current system'. The broader research community I was brought up in taught me early that publishing a lot, and in high-impact-factor journals, was the only way to get ahead, and I was very proud that I managed to get my name into Science- and Nature-branded journals and similar prestigious venues. Although I was part of research projects adept at getting articles accepted into high-impact journals, these same teams were also where I had the most vibrant discussions about how research culture needs to change. There was a broad awareness that the current system is not working well. However, many of us (including myself) felt that the best way to change it was to play by the rules until we became 'established enough' and then leverage that standing to help change the system from within. But the core question was: when is 'enough'? An often unspoken question was also: 'If the current system actually helps me succeed, will my interest in changing it wane over time?' In parallel, we had lively discussions in our laboratory about how to improve day-to-day research processes, ranging from introducing GoPro cameras in the laboratory to (perhaps quixotic) quests to improve bio-nano research, build bridges with cancer nanomedicine and develop a 'minimum information standard'. This passion for improving the research process grew into a desire to improve the broader research culture, which is now a core part of my advocacy work for CESAER, including the modernization of research careers. In my current advocacy work with non-academics, no one really cares about my Science or Nature articles; this new perspective has reinforced for me that the obsession with publication venues (which I also perpetuated) is largely internal to the research community. But this 'containment' also gives researchers more power to actually change the system.

How to move forward

Change is complex and will require the involvement of all levels and actors in academia. It is clear that those who evaluate researchers – most of whom climbed the publish-or-perish ladder successfully – may find it hard to reconsider the system in which they themselves excelled. It is therefore crucial to raise awareness that, while the system might indeed reward excellence (although this has been questioned), it does so at the cost of the adverse effects described above. Striving for a more balanced system of rewards – one that supports collaborative practices (for example, data sharing), transparency (for example, in peer review) and real value and impact (through article content analysis and article-level rather than journal-level metrics) – would reward actual rather than perceived excellence, without those documented adverse effects.

Dialogue is also needed to resolve how we can shift from an evaluation system that relies on metrics, and which allows researchers to be easily classified, towards one that returns to the roots of what defines the quality of a scientist's work: its content first and, at best, its dissemination second. This change of culture appears to run against the broader trend of ranking (and conditionally financing) research and higher education establishments, a trend to which the evaluation of researchers directly contributes and which also needs a fundamental rethink. Reform thus entails a more systemic consideration of research evaluation within the broader question of how research and higher education are financed and supported by public authorities, especially with regard to ensuring sustainable funding levels.

A vital aspect of the debate, in our eyes, is that the evaluation system as it exists today does not empower researchers towards excellence. Instead, the present system, combined with decreasing public budgets allocated to research and thus ever-increasing competition for research grants, largely functions as a convenient 'controlling and sorting' tool. It fails to support ECRs in setting realistic goals that help them grow and evolve in their careers. Where is the place in today's system for enhancing the reliability and impact of research – for example, through the robustness of datasets, the creation of outreach material for policymakers, the exploration of new avenues with no certainty of results, or constructive criticism of other scientists' approaches through peer review? Research is so much more than publishing articles. We should strive for an evaluation system that empowers researchers to act on all the currently hidden aspects of what constitutes the research ecosystem, and to contribute to its vitality and its connections with society.

A positive note in this debate is that, unlike debates about the future of academic publishing, evaluation is much more in the hands of the academic community and its funders, with little commercial interest in maintaining the status quo. Of course, some governments fund universities based on rankings and metrics. However, we would argue that this happens largely with the tacit approval (or even explicit encouragement) of the research community. Thus, provided that awareness of the problems is shared and common solutions are agreed upon, we have the power to effect change without having to convince a broader ecosystem of external actors who might have conflicting interests.

Which steps do we identify to implement this change of culture?

First, as a research community, across all career stages, we need to take a hard, realistic and honest look at the current reward system and its flaws, regardless of how well it may have served us.

Second, beyond localized examples of evolving research evaluation practices – for example, in the recruitment practices of some faculties and research institutes – a broader internal dialogue is needed within the research community, including research funding organizations (see the discussions held at the level of the Global Research Council in this regard). This dialogue should focus on what is important, what should be rewarded, and how individuals are evaluated at the different stages of their research careers.

We believe that the core motivation for all of this should be to empower ECRs: we are the actors whose futures are at stake, and as a community we feel a passionate need to improve research culture. Momentum is building, and to unleash it we should pursue changes in research evaluation that empower ECRs – perhaps the most important lever for improving research quality and culture.