A personal (re)introduction

Consider the following scenario: in 2003 a researcher active in the field of academic publishing and open access (OA) fell into an ‘OA coma’ (defined as a total inability to read, listen to, or engage with anything related to scholarly communication) that lasted for over 15 years. On finally waking up in 2018 and surveying the OA field, what would their reaction be to the changes that had taken place in the intervening years?

In two words: deep disappointment. What used to be called ‘the serials crisis’ is now the ‘unsustainable subscription model’; the oligopolistic structure of the academic publishing market is even more concentrated; and although the emphasis in OA has shifted from self-archiving non-peer-reviewed preprints to depositing peer-reviewed accepted manuscripts (AMs) in institutional repositories (IRs), the amount of research hidden behind paywalls remains substantial. At a fundamental level, little has changed, in spite of millions of working hours and millions of pounds having been spent in the effort to move academic publishing to a more rational structure.

The reason for this lack of progress is both patently obvious and surprisingly underplayed. This article has the ambitious aim of stating the substantive problem that has beset academic publishing for the last 25 years or so and suggesting a simple, cost-effective and immediate solution.

The fundamental reason why the academic journal market is intrinsically flawed

Simplifying to the extreme, consider the workflow that starts with the submission of a paper to a journal and ends with the author’s accepted manuscript (AM). The labour expended throughout this process is supplied by academics, for academics, with no monetary reward. If the AM were deposited in an OA institutional repository (IR), it would become a public good, freely available to anyone interested in the paper’s subject matter. In our opinion, the main objective of OA would thereby have been achieved.

The next stage of the process is the transition from AM to published article, which is undertaken by the publisher, who can reasonably expect a return for the value added to the AM (e.g. metadata tagging, HTML, PIDs, etc.). There is much confusion in the literature, and among librarians and publishers, about what additional value publishers contribute after the (basic) AM has been reviewed, produced as a PDF and hosted on an IR. The critical distinction is for whose benefit the additional value is produced: for example, how many of the 96 things that, according to Kent Anderson, publishers do actually relate to value for the academy, as opposed to value for publishers’ shareholders? Alternatively, we could ask how much libraries would be prepared to pay for online/print journal subscriptions if the entire content were available (in AM format) on OA IRs.

The fundamental point at the very core of the OA debate is that if these two objects – the AM and the published article – could be unbundled, most of the problems currently plaguing academic publishing would be solved: the AMs would provide OA to knowledge and the published article would be paid for by anyone interested in the additional services that it provides over and above the AM content.

The source of the persistent crisis in academic publishing lies in the fact that publishers bundle (i.e. combine) both the AM and the article into a single commodity and charge users not only for the additional value of the article (as they are entitled to do), but also for the AM content, even though they have contributed little to its production. Seen in this light, it is no exaggeration to conclude that the OA movement has achieved very little in the last 20 years or so.

Some promoters of OA point to the switch from a subscription model to article processing charges (APCs) as evidence of progress towards a wider diffusion of knowledge. This is a misconception. As long as APCs are set by publishers as an alternative way of charging for the combined AM-plus-article commodity, the same unsustainable economic model will persist, whereby libraries (i.e., ultimately, taxpayers) provide a large subsidy to the shareholders of commercial publishers – with the additional inefficiency that researchers from poorly endowed institutions are put at a disadvantage when submitting their research to APC-based journals.

The very concept of ‘article processing charges’ reflects a lack of appreciation of the substantial difference between AMs and articles (versions of record [VoRs]). With very few exceptions, the labour expended throughout the peer-review process is not undertaken for financial gain, but either on a reciprocal gift-exchange basis (for referees) or for peer esteem and recognition (for editors). Virtually the whole workflow, from initial submission through refereeing and revision to final decision, is performed online at little cost other than the time expended by authors, referees and editors (for which no monetary reward is expected). Irrespective of whether journals are funded by subscriptions or by APCs, the effective cost of producing (basic) AMs is therefore negligible.

If it is accepted that the target of OA is the content of AMs and not the packaging surrounding it, it follows that the establishment of APC-funded OA journals does not by itself solve the problem of unbundling AMs and articles. It is interesting to note that if a magic wand could be waved and all subscription-funded non-OA journals were turned overnight into APC-funded OA publications, the saving in subscription charges for libraries worldwide (estimated in 2008 to be £2.91bn) would be completely offset by a virtually identical increase in APC costs (£2.92bn), thereby swapping the current ‘unsustainable subscriptions crisis’ for an ‘unsustainable APC crisis’.

In the early days of OA advocacy the emphasis was on encouraging academics to self-archive their preprints, now called authors’ original manuscripts (AOs), a solution supported most vociferously by Stevan Harnad. In spite of being ‘a good idea’, generalized self-archiving did not happen (apart from in some disciplines, notably particle physics via arXiv). Nowadays academics are prompted to deposit their AMs in IRs and, again, in spite of this being an ‘even better idea’ – in so far as AMs, unlike AOs, are peer reviewed – the (quality-adjusted) take-up appears disappointing.

To state the obvious, the reason why both ‘good ideas’ have failed to become standard practice is the lack of individual incentives. The average academic whose paper has finally been accepted for publication can justifiably consider the job successfully completed: their academic reputation and esteem rise with the standing of the publishing journal among their peers, and the currency in which that standing is measured is the citation count – a metric attached to the article, not to the AM. Why should they bother to deposit their AM in an IR? What direct benefits would accrue to them? Or, more generally, what is the value of an AM once the paper is published as an article? As long as the main metric for measuring research impact is the citation count (either directly or indirectly, via the higher reputation of higher-impact-factor journals), the added value of AMs compared to articles is likely to remain low, as the path from depositing an AM in an IR to the article gathering more citations faces at least two substantial obstacles.

The first obstacle is discoverability: depositing an AM provides no guarantee that it will be found. Unlike journal publishers, who have a strong incentive (and commensurate resources) to increase the citation count of the articles in their journals (because the citation-driven impact factor is an important determinant of journal pricing), no equivalent systemic incentive motivates the resource-poor, overworked custodians of IRs to increase the visibility of the AMs they store.

The second obstacle is the poor read-to-citation conversion rate for AMs: if I wish to cite a piece of research and I have access to the paywalled article, I am far more likely to have discovered the article than the AM, whereas having access to the AM but not to the article prevents me from citing the latter (other than as a generic reference).

The idea that unbundling AMs and articles offers the key to unlocking the persistent stalemate in OA is not novel and has been restated recently by Toby Green. The fundamental difference between Green’s approach and ours resides in the identity of the player(s) who can turn the key: in Green’s view, ‘only one actor is needed to start this process of unbundling: the publisher. In making a basic, legal version free for anyone to read, gratis OA is achieved at a stroke’. In our view, to expect large multinationals in an oligopolistic market to ditch the economic model that allows them to earn substantial supernormal profits is an example of unwarranted optimism.

Our conclusion is different: in the journal publishing ecosystem, the object of OA – the knowledge contained in the AM (written, reviewed and corrected by academics for no direct financial reward) – currently has no value to the author(s) when divorced from the published article, which is managed and owned by profit-seeking oligopolistic publishers. This is the ultimate reason why the unbundling of AMs and articles (VoRs) cannot be achieved under the current system of academic journal publishing. As soon as the problem is posed in these terms, its solution becomes apparent: for the unbundling to be feasible, AMs must have a value independent of articles. We cannot expect the publisher to be the actor who starts this process. Our approach is more subtle: our main contention is that a substantive contribution to the process of endowing AMs with independent value comes from supplementing citations as the currency of academic esteem with a parallel channel: aggregating, validating and counting online usage of AMs.

The case for and against views and downloads

Why should views/downloads be given any academic credibility? Downloading or accessing the full text of a paper because the title sounded interesting is no guarantee of any meaningful impact – having looked at it, I may decide it was irrelevant, outdated, wrong, etc. But even if this problem could be magically solved, an even more basic objection could be raised: only research that is valuable ought to be rewarded, not research that is popular. Here one could insert the inevitable reference to PLoS’s third most downloaded article (‘Fellatio by Fruit Bats Prolongs Copulation Time’) to drive the point home. One should not forget, however, that such objections to measuring online usage – for example, that a download does not entail actual use, let alone impact – apply equally to citations.

The limitations of online usage of AMs as raw material for measuring non-citation impact are well known and well appreciated by librarians. What is less appreciated is that, however substantial the criticisms of views and downloads as impact measures may be, the critical issue is no longer whether data on online usage ought to be collected, aggregated and disseminated, but who ought to be in charge of the process: the academic community or commercial publishers?

We believe that views and downloads data ought to be treated as a prime example of open data (data that can be freely used, shared and built on by anyone, anywhere, for any purpose), whereas most commercial publishers consider online access data a private commodity. We can find no starker example of the difference between commercial publishers and (concerned) librarians on the treatment of data than the case of usage data reports. These are data generated by library users when they access journals their library has purchased. One might reasonably assume that such data belong to the library concerned. Alas, such an assumption is unwarranted, as detailed, for example, in section 2.4 of the standard Elsevier journal subscription contract:

‘Elsevier will make usage data reports on the Subscriber’s usage available to the librarians/administrators employed by the Subscriber for internal use only. Such reports may be accessed by vendors or other third parties only with permission of Elsevier and for the purpose of usage analysis of the Subscriber.’

We surmise that the many librarians who subscribe to and support the concept of open data would, rather than accept the above confidentiality clause, be prepared to follow the example of the University of California libraries, which insist on treating their own usage data as open and have modified section 2.4 to read:

‘The Subscriber reserves the right to collect, analyze, and make results of such analysis available to both internal and external constituencies of usage data compiled by Elsevier and made available to the Subscriber.’

It should come as no surprise that commercial publishers have long since perceived the market value of online access data and have been busy acquiring companies that manage the process (e.g. Elsevier’s purchases of Atira/PURE [August 2012], Plum Analytics [February 2017], bepress [August 2017], Aries [August 2018]) or that collect OA material (e.g. Elsevier’s purchase of SSRN [May 2016]).

In conclusion, online usage data are being collected with increasing vigour – not by librarians, who would do so for the benefit of the academic community and the public at large, but by commercial publishers, for the benefit of their shareholders. This is a development that should be managed by librarians rather than exploited by corporations: online usage data could not only make an indirect but extremely powerful contribution to achieving universal OA to scientific, scholarly and medical peer-reviewed papers, but could also redirect research efforts in a way that would reduce the knowledge gap between high-income and low/middle-income countries.

We argue our case with reference to a specific discipline – emergency medicine – and a specific geographical area – Africa – but the argument can be generalized to many other disciplines and regions.

In our example, an organization (a medical charity or a research council) is interested in assessing the impact on Africa of a set of clinical research articles. Currently, it has no choice but to resort to some citation-based metric, even though citations are extremely poor proxies for impact on any geographical region. Two options are available: the location of the author being cited, or the location of the author doing the citing. The drawbacks of both are obvious. Under the first option, any article authored by a non-Africa-based academic has, by definition, no impact on African readership; under the second, a necessary condition for registering any Africa-based impact is that the citer has authored an article. Taking emergency medicine as an example, 26 African countries (home to over 200 million people) have produced no academic articles in this field in the last five years. It follows that any citation-based metric would record no impact whatsoever in any of these countries – a highly unlikely conclusion. One would expect a significant number of (non-academic) clinicians involved in emergency medicine to have read, and to have been affected by, academic articles in their field, even though the resulting impact left no trace.

The problem here goes well beyond the failure to record the impact of articles that are read but not cited. After all, citations do not save lives; clinical practice affected by exposure to academic clinical articles does. We argue that the failure to record non-citation impact may be of little significance as far as the dissemination of existing knowledge is concerned, but it has pernicious effects on the production of new knowledge. This point seems to have been neglected by supporters of OA, who rightly stress the inequity produced by paywalls: when researchers in, say, Africa cannot learn from the latest developments in whichever discipline interests them, not only are their lives diminished, but the international research community is also deprived of the potential contributions that these researchers could have made had it not been for the knowledge apartheid enforced by paywalls. Much less emphasis is placed on the inequality indirectly generated by the lack of metrics for non-citation impact.

This latter point merits some further explanation. Suppose you are a first-world researcher motivated both by the desire for peer recognition and esteem and by the wish to enhance the quality of life of at least some of your fellow human beings. Under the current system of academic publishing, you are forced to choose between advancing your academic standing and carrying out welfare-improving research. The reason for this invidious situation is simple: if your research has its greatest impact in countries with low publication rates, your academic reputation (as measured by citations) is not improved, even if your research is read widely and changes lives for the better. Notice also that the recommended switch from subscriptions to APCs makes no difference to this scenario: admittedly, if your publication is now OA it will reach a wider audience but, as long as non-citation impact is not measured, your academic recognition will not improve and your citation count will remain low.

This is a well-known problem, yet why have no solutions been put forward? It is a recurring theme in this paper that proper attention ought to be paid to developing new and more effective incentives. Who would benefit from, and who would be negatively impacted by, a re-balancing of academic rewards that gave more weight to non-citation impact?

As an illustrative exercise we have analysed one specific discipline (emergency medicine) for one specific region (Africa) for the period 2014 to mid-2019, counting all articles with at least one author with an African affiliation as recorded in the Scopus/SciVal database. We removed all articles in languages other than English and French and all journals with fewer than four qualifying articles in the period. The following two tables show the top ten rankings, first by citations and then by views (OA marks an open access journal; E stands for published by Elsevier).
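To make the selection and ranking procedure concrete, here is a minimal sketch of the kind of processing involved, assuming a hypothetical CSV export; the file name and the column names are illustrative stand-ins, not the actual Scopus/SciVal schema.

```python
# Sketch of the selection and ranking exercise described above. The CSV file
# and its column names ("language", "journal", "doi", "views", "citations")
# are hypothetical stand-ins for a Scopus/SciVal export.
import pandas as pd

df = pd.read_csv("scopus_africa_emergency_medicine_2014_2019.csv")

# Keep only English- and French-language articles.
df = df[df["language"].isin(["English", "French"])]

# Drop journals with fewer than four qualifying articles in the period.
df = df[df.groupby("journal")["doi"].transform("count") >= 4]

# Aggregate per journal and compute the mean-based rankings used below.
journals = df.groupby("journal").agg(
    total_views=("views", "sum"),
    total_citations=("citations", "sum"),
    total_articles=("doi", "count"),
)
journals["mean_views"] = (journals["total_views"] / journals["total_articles"]).round(1)
journals["mean_citations"] = (journals["total_citations"] / journals["total_articles"]).round(1)
journals["rank_by_views"] = journals["mean_views"].rank(ascending=False, method="min").astype(int)
journals["rank_by_citations"] = journals["mean_citations"].rank(ascending=False, method="min").astype(int)

print(journals.sort_values("rank_by_citations").head(10))  # Table 1
print(journals.sort_values("rank_by_views").head(10))      # Table 2
```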

Table 1

Top ten journals in emergency medicine ranked by citations

| Scopus source title | Total views | Total citations | Total articles | Mean views | Mean citations | Rank by citations | Rank by views |
| Resuscitation (E) | 297 | 409 | 15 | 19.8 | 27.3 | 1 | 3 |
| World Journal of Emergency Surgery (OA) | 2117 | 553 | 35 | 60.5 | 15.8 | 2 | 1 |
| Shock | 90 | 68 | 7 | 12.9 | 9.7 | 3 | 9 |
| Annals of Emergency Medicine (E) | 93 | 61 | 7 | 13.3 | 8.7 | 4 | 8 |
| Injury (E) | 1120 | 619 | 104 | 10.8 | 6.0 | 5 | 16 |
| Burns (E) | 909 | 394 | 82 | 11.1 | 4.8 | 6 | 14 |
| Internal and Emergency Medicine | 57 | 26 | 6 | 9.5 | 4.3 | 7 | 17 |
| Scandinavian Journal of Trauma, Resuscitation and Emergency Medicine (OA) | 76 | 22 | 6 | 12.7 | 3.7 | 8 | 10 |
| Academic Emergency Medicine | 123 | 36 | 10 | 12.3 | 3.6 | 9 | 13 |
| International Journal of Emergency Medicine (OA) | 571 | 66 | 20 | 28.6 | 3.3 | 10 | 2 |

Table 2

Top ten journals in emergency medicine ranked by views

| Scopus source title | Total views | Total citations | Total articles | Mean views | Mean citations | Rank by views | Rank by citations |
| World Journal of Emergency Surgery (OA) | 2117 | 553 | 35 | 60.5 | 15.8 | 1 | 2 |
| International Journal of Emergency Medicine (OA) | 571 | 66 | 20 | 28.6 | 3.3 | 2 | 10 |
| Resuscitation (E) | 297 | 409 | 15 | 19.8 | 27.3 | 3 | 1 |
| International Journal of Emergency Management | 100 | 4 | 6 | 16.7 | 0.7 | 4 | 27 |
| Prehospital and Disaster Medicine | 308 | 45 | 21 | 14.7 | 2.1 | 5 | 18 |
| BMC Emergency Medicine (OA) | 525 | 102 | 38 | 13.8 | 2.7 | 6 | 14 |
| Journal of Emergencies, Trauma and Shock (OA) | 95 | 16 | 7 | 13.6 | 2.3 | 7 | 17 |
| Annals of Emergency Medicine (E) | 93 | 61 | 7 | 13.3 | 8.7 | 8 | 4 |
| Shock | 90 | 68 | 7 | 12.9 | 9.7 | 9 | 3 |
| Scandinavian Journal of Trauma, Resuscitation and Emergency Medicine (OA) | 76 | 22 | 6 | 12.7 | 3.7 | 10 | 8 |

If citations are replaced by online views/downloads, top-tier journals see their rankings drop precipitously: Injury, Burns, and Internal and Emergency Medicine fall from 5th, 6th and 7th to 16th, 14th and 17th place, respectively. Conversely, when views are measured instead of citations, the International Journal of Emergency Medicine (an OA title, part of Springer Nature’s BioMed Central), the International Journal of Emergency Management, Prehospital and Disaster Medicine, BMC Emergency Medicine (OA) and the Journal of Emergencies, Trauma and Shock (OA) are catapulted from 10th, 27th, 18th, 14th and 17th place to 2nd, 4th, 5th, 6th and 7th, respectively. Our intention here is not to supplant existing (citation-based) journal rankings with a new order, but to suggest that greater transparency regarding usage would allow libraries to make more informed purchasing decisions, researchers to identify outlets that would facilitate their work being viewed/downloaded, and readers to discover material that is relevant to their context.

Highly profitable commercial publishers are unlikely to push for more views-based impact measures. OA journals (whose articles can be viewed without the obstacle of expensive paywalls) would definitely gain from views/downloads being given more weight, but they lack the clout (and the resources) to endow online views/downloads with the academic recognition required to make authors undertake impactful research.

The conclusion reached so far is that if AMs are to be unbundled from articles, then they ought to be given independent value based on their advantage over articles in terms of wider reach. Value implies comparison, but how can AMs be compared unless online usage is measured, aggregated, validated and disseminated?

Why can online usage data not be aggregated, validated and disseminated?

The instructive answer to this question is provided by the fate of the Publisher and Institutional Repository Usage Statistics (PIRUS) project. In a nutshell, PIRUS aimed to collect all online usage data generated by UK IRs and publishers’ servers, to validate them using COUNTER criteria, and to make the resulting cleaned-up data available to all stakeholders. The very assumption that publishers would support a mechanism that creates value for AMs (an object they should not own), and would be willing to release for free usage data (which they do own) rather than attempt to monetize them, shows the importance of assessing economic incentives when designing a project as ambitious as PIRUS.

The proximate reason given for the failure of PIRUS was that “PIRUS proposed the establishment of a global central clearing house (CCH) to deliver such a service. Unfortunately, it became clear from a survey conducted at the end of the project that the majority of publishers were not, largely for economic reasons, yet ready to implement or participate in such a service.”

The moral of this sad tale is that if online usage data are ever to be aggregated, validated and disseminated, it must be through a mechanism that firstly acknowledges the powerful disincentive of commercial publishers to support any initiative that enhances the value of AMs and, secondly, does not rely on a global central clearing house to collect the data.

A new way of aggregating online usage data: BitViews

There is a feasible low-cost solution to the technical problem that beset PIRUS: rather than having a central clearing house with which each repository interacts – in other words, a hub-and-spoke model – a blockchain can be used to distribute the work across repositories, aggregate usage from different sources and ensure conformance with COUNTER standards without needing a central body.

We have described elsewhere the basic features of such a solution, which we call BitViews. In summary, participating repositories constitute the ‘nodes’ of the network; over a fixed time period (t), all nodes send their (encrypted) raw usage data (including the DOI of material accessed, time-stamp, and requesting IP address) to a single, randomly-selected node which collates the activity into a block for time t and applies agreed-upon open source rules (e.g. COUNTER criteria) to filter out non-human activity, double-clicks, and so on. A second randomly-selected node verifies that COUNTER criteria have been applied correctly and, if so, the block is added to the chain. The process is then repeated, generating a validated, COUNTER-conformant blockchain of online usage with no central clearing house. While formal COUNTER compliance requires independent audit, BitViews instead offers an open-access ledger and transparent, ‘smart contract’-type rules for counting online usage; in this way, both product (ledger) and process (rules) are open to full public scrutiny.
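The following is a minimal sketch of a single collation round under these rules, assuming simplified COUNTER-like filtering (a placeholder robot list and a 30-second double-click window); node selection, field names and thresholds are illustrative assumptions, not the project’s specification.

```python
# Sketch of one BitViews collation round. The robot list, the 30-second
# double-click window and the event fields are illustrative assumptions.
import hashlib
import json

ROBOT_AGENTS = {"Googlebot", "bingbot"}  # placeholder robot list
DOUBLE_CLICK_WINDOW = 30                 # seconds, COUNTER-style rule

def counter_filter(raw_events):
    """Apply simplified COUNTER-like rules to raw usage events."""
    events = [e for e in raw_events if e["agent"] not in ROBOT_AGENTS]
    events.sort(key=lambda e: (e["ip"], e["doi"], e["time"]))
    filtered, last_seen = [], {}
    for e in events:
        key = (e["ip"], e["doi"])
        # Collapse repeated requests for the same item from the same address.
        if key in last_seen and e["time"] - last_seen[key] <= DOUBLE_CLICK_WINDOW:
            continue
        last_seen[key] = e["time"]
        filtered.append(e)
    return filtered

def collate_block(raw_events, prev_hash):
    """A randomly selected collating node builds the block for period t."""
    body = counter_filter(raw_events)
    payload = json.dumps({"events": body, "prev": prev_hash}, sort_keys=True)
    return {"events": body, "prev": prev_hash,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

def verify_block(block, raw_events):
    """A second node re-applies the same open rules and checks the result."""
    return block["events"] == counter_filter(raw_events)

# One round: the collator builds the block, the verifier approves, the chain grows.
chain = []
raw = [{"ip": "1.2.3.4", "doi": "10.1016/j.afjem.10000000",
        "time": 100, "agent": "Mozilla/5.0"},
       {"ip": "1.2.3.4", "doi": "10.1016/j.afjem.10000000",
        "time": 110, "agent": "Mozilla/5.0"}]  # double-click: filtered out
block = collate_block(raw, chain[-1]["hash"] if chain else "genesis")
if verify_block(block, raw):
    chain.append(block)
```

Because the verifying node simply re-runs the same deterministic, open-source rules, agreement between collator and verifier can be checked by anyone, which is what makes both the ledger and the counting process publicly auditable.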

By way of example, suppose that on 10 October 2018 a researcher in Sydney, Australia, wished to view article A, published in the African Journal of Emergency Medicine, on ScienceDirect; the researcher’s computer would send Elsevier a request for the full text (see Figure 1).

Figure 1 

A typical online access web page

Now Elsevier would have the researcher’s IP address, the article’s DOI and the time of the request: the where, the what and the when. This proposed usage event is sent to the collating node, its COUNTER compliance is verified and, if compliant, it is added to the ledger as:

DOI: 10.1016/j.afjem.10000000; Location: Sydney, Australia; Timestamp: 10.10.2018.

Notice that neither the IP address nor the precise time is recorded, protecting researchers’ privacy. The workflow chart for BitViews looks as follows (see Figure 2).

Figure 2 

A typical BitViews flowchart
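To make the privacy step concrete, here is a minimal sketch of how a raw usage event could be reduced to the public record shown above; `geolocate` is a hypothetical city-level lookup (e.g. backed by a GeoIP database), not part of any specified BitViews API.

```python
# Sketch of the privacy-preserving step: a raw usage event is reduced to the
# public ledger record shown above. geolocate() is a hypothetical lookup.
from datetime import datetime, timezone

def geolocate(ip_address):
    # Placeholder: a real implementation would query a GeoIP database.
    return "Sydney, Australia"

def to_ledger_record(event):
    """Drop the IP address and truncate the timestamp to a date."""
    return {
        "doi": event["doi"],
        "location": geolocate(event["ip"]),  # city level only, no IP kept
        "date": datetime.fromtimestamp(event["time"], tz=timezone.utc)
                        .strftime("%d.%m.%Y"),
    }

raw_event = {"ip": "203.0.113.7", "doi": "10.1016/j.afjem.10000000",
             "time": 1539129600}  # 10 October 2018, 00:00 UTC
print(to_ledger_record(raw_event))
# {'doi': '10.1016/j.afjem.10000000', 'location': 'Sydney, Australia',
#  'date': '10.10.2018'}
```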

What BitViews produces is, essentially, a table of usage events: a public ledger that can be searched and analysed by anyone, anywhere, at no cost. By providing aggregated and validated online usage data, BitViews would furnish researchers with the raw materials to analyse not only the geographical reach of individual papers and journals but also the dynamics of that reach. For example, the number of instances in which an article’s DOI appears on the blockchain is that article’s usage count, and it would be equally simple to establish an article’s usage by country. Similarly, searching the ledger for ‘10.1016/j.afjem’, in African countries, in 2017 would give the continent-level usage statistics for the African Journal of Emergency Medicine in that year. The value that BitViews adds relative to individually collected IR data is threefold: usage statistics are calculated transparently and consistently, they are collated across platforms, and they are accessible in a single OA ledger. Although BitViews can collect and validate online usage data irrespective of whether the item accessed is an AM or the published article, it has the potential to be a game-changer as far as the value of AMs is concerned.
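The searches just described are straightforward once the ledger is public; here is a minimal sketch, treating the ledger as a list of records in the illustrative format above (all entries invented):

```python
# Sketch of simple analyses over the public ledger, treated here as a list of
# records in the illustrative format above; all entries are invented.
from collections import Counter

ledger = [
    {"doi": "10.1016/j.afjem.10000000", "location": "Sydney, Australia",
     "date": "10.10.2018"},
    {"doi": "10.1016/j.afjem.10000000", "location": "Cape Town, South Africa",
     "date": "03.05.2017"},
    {"doi": "10.1016/j.afjem.20000000", "location": "Lagos, Nigeria",
     "date": "21.06.2017"},
]

# An article's usage count: the number of times its DOI appears on the ledger.
views = sum(r["doi"] == "10.1016/j.afjem.10000000" for r in ledger)

# Journal-level usage in 2017: match the journal's DOI prefix and the year.
afjem_2017 = [r for r in ledger
              if r["doi"].startswith("10.1016/j.afjem")
              and r["date"].endswith(".2017")]

# Usage by country, reading the country from the city-level location field.
by_country = Counter(r["location"].split(", ")[-1] for r in afjem_2017)

print(views, len(afjem_2017), dict(by_country))
# 2 2 {'South Africa': 1, 'Nigeria': 1}
```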

BitViews as a game-changing nudge

If BitViews were merely an efficient and cheap new way of aggregating online access data for scholarly, scientific and medical peer-reviewed papers, it would represent just another small step towards academia reclaiming ownership of its data and using them for the benefit of all. But this aim, laudable as it may be, is not the ultimate objective of the BitViews project. By providing a subtle ‘nudge’ to authors of peer-reviewed articles, BitViews aims to create a parallel channel for academic recognition and esteem by counting, validating and disseminating data on where, when and how often AMs specifically are viewed by readers worldwide. The argument is strikingly simple: as soon as peer recognition and esteem depend (also) on usage, it is in each researcher’s individual interest to ensure maximum visibility, which is achieved most efficiently by depositing AMs in IRs, free from the shackles of readership-decimating paywalls.

BitViews can satisfy the demand for non-citation impact analysis, and it is easy to foresee that funding bodies, promotion committees and the academy in general will come to consider validated online access data as part of their assessment of research impact. Notice the virtuous circle at work here: the provision of aggregated, validated, publicly available online access data allows any institution interested in assessing research impact to make use of such data. This in turn creates a hitherto absent incentive for authors of peer-reviewed papers to reach the widest possible audience of readers, which is best achieved by making AMs as widely available as possible, i.e. by depositing them in paywall-free IRs. This in turn generates more data on online access, and the circle continues indefinitely. BitViews would then have achieved its ultimate goal: to create an ecosystem that maximizes the amount of peer-reviewed OA research.

BitViews: the obstacles ahead

It would be the height of naivety to assume that a project like BitViews would not encounter formidable obstacles on its path to universal OA to scholarly, scientific and medical research. Identifying both the sources of opposition and the forces to overcome them is the key to success. The obstacles facing BitViews can be grouped into two main categories: internal and external – the former relating to BitViews as a piece of technology, the latter to BitViews as an economic and social construct.

The core technology of BitViews is, unsurprisingly, blockchain: a very secure technology finding applications across industries. Undoubtedly, there are difficulties to be worked on: integration with COUNTER, making BitViews a plug-and-play application that works with the various platforms used by IRs, etc. The inevitable comparison with Bitcoin could easily be misinterpreted. Whereas under Bitcoin anyone can check the validity of a proposed transaction, under BitViews only a select few reputable repositories are allowed to add transactions to the ledger (under a consortium blockchain arrangement). As a result, BitViews dispenses completely with computationally intensive ‘mining’, a feature that makes Bitcoin extraordinarily wasteful in terms of computation and energy consumption. It is also worth mentioning that even though publishers and IRs see millions of accesses per year, if stored in a well-designed database the storage requirements should be very manageable – probably around 1 GB per 10 million views. We estimate that if each ‘node’, or IR, were to store the entire BitViews blockchain locally, the storage cost would amount to less than US$50.
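As a back-of-envelope check on these figures (assuming the 1 GB per 10 million views estimate above), the implied record size is

$$\frac{10^9\ \text{bytes}}{10^7\ \text{views}} = 100\ \text{bytes per view},$$

which is consistent with a compact record holding little more than a DOI, a city-level location and a date, and with the estimate that full local replication would cost each node well under US$50.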

As far as the external obstacles to the success of BitViews are concerned, they come from two separate camps, one very obvious – commercial publishers – the other, very surprising – well-intentioned librarians.

The same reasons that led leading publishers to sink the PIRUS initiative apply even more strongly to BitViews. If successful, BitViews will turn the proprietary online access data currently owned by commercial publishers into open data, freely available to anyone. By reducing peer-reviewed articles (owned by oligopolistic corporations) to purveyors of citations, stripping them of their unwarranted function as disseminators of research, and increasing the value of peer-reviewed AMs (a public good freely available to anybody) as carriers of scientific and scholarly knowledge, BitViews could help correct the persistent market failure in scholarly communication and finally unbundle AMs from articles. It seems realistic to expect commercial publishers not to join BitViews with their platforms, at least initially. Would this non-participation not sink BitViews, as it did PIRUS? We think not – for two main reasons.

First, online access data produced by commercial publishers are already available to libraries (for example, through the usage data reports discussed above). Even when validated through the COUNTER system, publisher-supplied viewing data have an in-built bias towards accommodating practices that artificially inflate the volume of views (as shown conclusively by Bergstrom). Nevertheless, once potentially tainted publisher-produced online access data can be compared with the COUNTER-conformant, bias-free data provided by BitViews, all sorts of adjustments become possible: exposing systematic biases, aggregating ‘clean’ BitViews data with de-biased publishers’ data, and so on.

Second, we expect that in the medium term the initial refusal by commercial publishers to join the network of BitViews-compliant repositories will come under increasing pressure from both librarians and academic authors. It would be very surprising if libraries refused to follow the University of California’s good example of treating online access data to articles as open data and not the publishers’ own property, to be shielded from public scrutiny. We can foresee a healthier environment where in order to obtain validated data on online access to peer-reviewed articles, interested parties will have to rely less and less on proprietary platforms such as Scopus/SciVal. Academic authors, too, can be expected to object to publishers limiting the availability of aggregated data on the non-citation impact of their articles in the new landscape where views/downloads scores are relevant factors in assessing impact and therefore peer recognition and esteem.

Counter-intuitively, we regard the attitude of the international librarian community as possibly the most challenging obstacle to the success of the BitViews project. Far from criticizing the aims of BitViews or finding serious faults in the concept, librarians have given the project an almost unanimously positive reception when we have presented it at conferences and workshops, as well as in personal communications. How can this generous welcome be an impediment to BitViews? The reason is rather subtle. The consensus amongst librarians is that BitViews is ‘a good idea’ – and the last thing BitViews needs is to be considered ‘a good idea’. The proliferation of ‘good ideas’ is one of the main reasons why the last 20 years or so have seen such slow progress in the good idea par excellence: OA. If the substantial and deep-rooted inefficiency of the current academic publishing market is to be removed, the academy has to focus on clear and specific solutions. The difference between ideas and solutions is not a matter of semantics: ideas are for debate, solutions are for implementation. Ideas can always be improved, extended and refined; solutions are binary – either they work or they do not. The very concept of cost-benefit analysis is not applicable to ideas, but is fundamental to assessing solutions. Ideas can be produced locally, whereas solutions often require multi-agent, multinational co-ordination.

The arguments and evidence that we have produced so far in this article confine the BitViews concept to the category of ‘good idea’, whereas we wish to propose it to the librarian community as a ‘good-enough solution’.

BitViews is not a ‘good idea’ – it is a viable solution

We estimate the cost of producing, within a timescale of 18 months, a turn-key software application that uses blockchain technology to create a public ledger of online usage data at £250,000. Although this cost is vanishingly small compared to the direct costs of journal subscriptions, it is still large enough to prevent any single institution from undertaking the project on its own. With no single funder available, the free-rider monster raises its ugly head, with every interested party (i.e. university and research libraries) expecting everyone else to contribute. The dispersion of potential contributors can be turned – we surmise – from an obstacle into an opportunity by a suitable combination of transparency and online technology. We suggest that BitViews be funded by its potential users (university and research libraries) via a new form of crowdfunding, which we call conditional crowdfunding.

Under conditional crowdfunding, the financial commitments undertaken by contributors are conditional in the sense that the effective amount of money to be disbursed depends on the total amount raised. Specifically, if the total amount falls short of the £250,000 target, all contributions will be returned and the project will be closed. If the total amount raised exceeds the £250,000 target, the surplus will be returned pro-rata to each contributor. It can be seen that this scheme provides a simple remedy to the problem both of pessimistic potential contributors who, expecting the project to fail, choose not to contribute at all and of over-optimistic contributors who, expecting the project to raise more than its target, reduce their own contributions. In order to introduce an element of fairness in the presence of potential contributors with vastly different economic resources at their disposal, we suggest that libraries make a (conditional) contribution equivalent to 0.05% of their annual journal subscription charges.
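A minimal sketch of the settlement rule just described; the library names and pledge amounts are invented for illustration.

```python
# Sketch of conditional-crowdfunding settlement: pledges are returned in full
# if the target is missed, and any surplus is returned pro rata if it is met.
TARGET = 250_000  # GBP

def settle(pledges, target=TARGET):
    """Return (amount actually collected per contributor, project goes ahead)."""
    total = sum(pledges.values())
    if total < target:
        # Target missed: every pledge is returned and the project closes.
        return {name: 0.0 for name in pledges}, False
    # Target met or exceeded: each contributor pays pledge * target / total,
    # so the surplus is returned pro rata.
    scale = target / total
    return {name: round(p * scale, 2) for name, p in pledges.items()}, True

# Invented example: three pledges totalling £300,000 against a £250,000 target.
pledges = {"Library A": 150_000, "Library B": 100_000, "Library C": 50_000}
collected, funded = settle(pledges)
print(funded, collected, sum(collected.values()))
# True {'Library A': 125000.0, 'Library B': 83333.33, 'Library C': 41666.67} 250000.0
```

The design removes the two strategic reasons to hold back a pledge: pessimists risk nothing if the target is missed, and optimists cannot gain by under-pledging, since any surplus is refunded proportionally.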

In order to avoid a war-of-attrition scenario (where every player waits for others to move first), the crowdfunding window will be open for a limited period of three months from February to May 2020. Permission will be requested from contributors to publicize their participation in the project (but not the amounts contributed). We expect leading libraries to support BitViews and we hope that this will encourage others to follow suit.

Needless to say, the entire project will uphold the highest standards of openness and transparency. The BitViews website will track every step in the development of the project (with the software application being open source) and all expenses will be itemized and published. As soon as BitViews becomes active, two key performance indicators will be used to track its progress: 1) the number of institutions using the BitViews application and 2) the number of unique documents and unique usage events recorded on the distributed ledger. These metrics will be tracked and displayed on the BitViews website, along with specific targets for growth at six and 12 months post-deployment.

We believe that this crowdfunding exercise will benefit the OA movement irrespective of whether the funding target is attained. If successful, the BitViews template could be used for other similar initiatives that are currently beset by co-ordination problems. But even if the crowdfunding attempt were to fail, it could nevertheless stimulate a long overdue debate on reforming academic publishing by taking active steps, not by setting up yet another committee/commission/research unit. Libraries that failed to support the project would have to justify their stance: which aspects of BitViews did they object to, or believe would not work? Which other projects did they believe offered better cost-effectiveness than BitViews, and why?

Tentative conclusion

We wish to conclude on a positive note. We are confident that most librarians and unbiased policymakers would agree that unbundling AMs and published articles provides the basis for sustainable OA to all scholarly, scientific and medical peer-reviewed research. The crux of the issue is how to achieve this. We discard the suggestion that a prime mover for change will come from the (commercial) publishing industry. Our analysis suggests that if AMs were given value independent of published articles, the beneficiaries of this newly created value would have a strong incentive to buy into the system. The direct beneficiaries are academic authors themselves who, under the mechanism described in this paper, would gain a parallel channel of peer recognition and esteem based on the number, location and dynamics of online usage of AMs. BitViews simply provides the technology for aggregating, validating and disseminating online usage data. Instead of relying on publishers – turkeys voting for Christmas – as the main actors to set AM/article unbundling in motion, the BitViews project is predicated on the assumption that librarians worldwide are willing to take concrete steps to initiate the unbundling process. This is not to absolve academic authors of their responsibility (complicity?) in the slow progress of OA. The main reasons for targeting libraries as agents of change are that, compared to academics, they number in the (few) thousands rather than the hundreds of thousands, and that they are far better disposed towards reforming the academic journal publishing system than citation-focused authors. The BitViews project is predicated on the goodwill of libraries worldwide, and the use of conditional crowdfunding is meant to alleviate the worst features of the free-rider problem by providing a simple mechanism to spread fairly the (relatively) small set-up cost of BitViews.