Publisher's Note

A correction article relating to this publication can be found here: https://doi.org/10.1629/uksg.449

Mandate fever

Early 2018 saw a more than usually intense conference season in the UK, notable in part for the – dare I say it – historic restatement by Steven Hill of Research England (and formerly HEFCE) of Research England’s intention to mandate open access (OA) monographs in the Research Excellence Framework (REF) after next – expected around 2027, with the next REF commencing in 2021. His presentation took place in the highly visible setting of the second University Press Redux conference in London. He further clarified his position in a blog post indicating that, in his view, there could be many routes to OA monographs, about the precise elements of which he was open-minded – or, as he signalled at the event itself, ‘kind of agnostic’.

Much of the largely UK-centric reaction that followed was swift and alarmed. Horne’s publishing industry blog for BookBrunch likened it to ‘a man tossing a hand-grenade into a room, along with a note reminding its occupants that he had told them he was going to do so and then offering a few suggestions as to how they might deal with it’. Others, according to tweets by Horne and from audience members, interpreted Hill’s presentation as an attack on the role of academic publishers’ commissioning function and on the monographs industry itself, especially the commercial sector. This unusually robust reaction, with its diverse invocations of the greater good, should in no way disguise the fact that this debate concerns the routing of funding for monographs and jobs in academic publishing. Likewise, major changes impact on the academic labour market, where publications often act as proxy indicators of employability, as many, including Eve and Miller, have noted. For some authors, the publishing of monographs can focus rather narrowly on the business of CV construction and hasty perceptions of what is in the best interests of their department’s REF submission. Others take a wider view. But was Hill signalling a decisive shift of priorities for the monograph away from the labour economics of prestige publishing and its expensive badges of certified excellence, towards a regime where reach, impact and the needs of readers and libraries for affordable access to long-form research outputs carry a higher weighting than they do now?

The end is probably not nigh

Readers have remained somewhat at the edges of previous episodes of the monograph debate, with more attention over time given to the needs of a top end of elite specialist researchers to get published, as Adema has noted of Crossick’s deliberations. More recently (post-Redux) there were warnings about the dangers to academic trade titles, as signalled by Finn and Fisher – though these are surely an exception in the making and easily accommodated (as Eve pointed out, and later Research England itself). There was also concern about the real ‘need’ for a huge amount of extra funding to meet the mandate challenge, with figures as high as £20,000 per title said to be needed. Scepticism in this context was also expressed in informal conversations at the event and on social media about the potential greater role for the UK’s new university presses, in respect of Stone’s bold proposals at Redux for Jisc to develop a toolkit or dynamic purchasing system (DPS) to enable new publishers in the university sector to contract with suppliers in the field. As always, the call for new business models has echoed loudly and predictably from several quarters.

Of course, we already have a sustainable (for publishers) commercial monographs system: the one we have now, which has shown extraordinary longevity. Why not just continue publishing more monograph titles and raising prices? That path would require the least adjustment all round. As one commentator wryly noted, for an ‘“unsustainable” system, the system seems remarkably sustained.’ Many commissioning editors at academic publishing houses will confess, on a non-attributable basis, to being fatigued by a market rationale in which the price of a monograph always gets higher and the print run ever lower – assuming a title is not fully print-on-demand in any case. This leaves the scope of what is considered commercially practical to publish determined to an unhealthy degree by the only deep pockets left in the room: those of the richest part of the North American academy and a very select handful of other elite research institutions worldwide. It is not unusual for commissioning editors to be tasked with commissioning up to 100 titles a year to sustain the throughput of a product line in a subject discipline. All these titles will retain a good gross margin, but only a tiny handful will be credited with the very rare capacity to ‘break out’ – as if high sales and a decent readership were some kind of jailbreak achieved against the vigilance of senior management gate-keepers. It can feel like that, and certainly some excitement in life is lost if you work on an academic list where the aggregate sales outcome may vary by at most 5% from a very solid, reliable forecast based on last year’s nearest equivalent.
While aggregators and more complex forms of bundled monograph sales and loan arrangements are now consolidated in the marketplace, the percentage of academic title sales that comes from print remains high, even if the latest wisdom suggests it is down some 10% from the 90% Harvard Magazine reported as recently as 2015 – an astonishingly high figure, even though many librarians wish to move decisively towards ‘digital delivery’. Costs of monograph print distribution are high, perhaps around 35% of cover price, whereas marginal distribution costs for digital formats are close to zero. Clearly there is scope to shorten the supply chain and economize, even if print remains a favourite with readers, who generally like to work with digital and analogue alongside each other for deeper engagement.

Decoupling and its consequences

Habitués of the scholarly communications world will be aware that Redux was not the first time Hill had raised the notion of decoupling, and it is a notion with wider and longer roots. It is reasonable to ask, therefore, whether there are any conclusions to be drawn from the original paper he cited at Redux as his inspiration on the topic of decoupling scholarly journals – an article by Priem and Hemminger from 2012. Might the assumptions that fed this enthusiasm for decoupling offer some clues as to the nature of the transformation ahead? Actually, the answer is yes: if not a magic formula for reallocating library budgets or a fit-for-all-purposes business model then, to judge from Priem and Hemminger, it is our old trio of friends – technology, markets and disruptive innovation – who appear to be the answer to the stranglehold of publishers in the journals market and, by extension, Hill suggests, the books market too. The authors explain decoupling in the following ways: ‘In software this means making the pieces of the systems as small, distinct, and modular as possible. The basic providers of scholarly publishing should not be publishers or journals, but smaller, more specialized, more modular services.’ Heavy-duty disintermediation is thus envisaged, with publishing functions disaggregated and decentralized away from the publisher: ‘scholarly communication à la carte … at the best price’. Innovation also appears in Hill’s vision as a major upside of such a development, though here I share the scepticism of many traditional publishers over the true scale of real demand for this.

The abstract for Priem and Hemminger looks to a rapidly evolving ‘marketplace of tools’ responding to ‘new technologies and users’ needs’. From a publisher’s point of view, there are problems with the article’s three categories of journal functions: certification (elsewhere called ‘badging’), dissemination and archiving. In collapsing editing functions – including structural content editing and commissioning as well as copy-editing and proofreading – into subcategories of preparation and publication (themselves subsections of a crude broader heading of ‘dissemination’), the article certainly does not give those activities the recognition they deserve. For scholarly communications to be improved and radically remixed, the authors further explain, nothing short of full-scale disruption will achieve progress: ‘We suggest that no amount of activism or innovation aiming to correct closed publishing models or broken certification models will succeed in the current system that closely bundles all the functions together.’

This future is modular, with the academic author the consumer of discrete, separable services, able to pick and choose what they want: ‘The open intellectual market will provide incentive for scholars to patronize preparation services whose work consistently broadens audiences and boost impact’.

Anarchic competition will determine the outcome of this experiment, with ‘search’ being left ‘to freely evolve driven by market forces’ – the same market forces that have seen Google sequester over 90% of search traffic in major markets, effectively ending competition. Established publishers in this model can look forward to a leaner future as evolved ‘lean, responsive, certification providers’. The decoupled journal operates through the ‘techno-anarchism’ of the web, embracing ‘a laissez-faire approach to regulation, preferring to give the market the maximum possible space to innovate’. And academic authors could thereby look forward to ‘a future in which administrators and funders value a certain time-tested, quantitatively based certification the same way they would value publication by a top journal today’. [My emphasis]

This is a prospect unlikely to endear itself to many in the qualitative humanities professions or to denizens of new subdisciplines such as critical data studies (which casts a critical eye on data utopianism). Such techno-optimism from 2012 looks a little starry-eyed in 2018, but read in a certain way it is an exhilarating vision (much in the way an expected cold shower can be). It offers more potential upsides for monographs, in my view, than just extending the status quo until the business-as-usual monograph engine battery finally expires. Hill seems to be envisaging a future with authors contracting in a modular fashion and even providing some of the services that scholarly communications would require. In his presentation he cited the JSTOR research tool Topicgraph (at-a-glance topic mapping of books), the open publishing tool leanpub.com (‘publish early, publish often’, 80% royalties!) and the life sciences post-publication peer review service F1000Prime. (Leanpub describes its platform as a ‘combination of two things: a publishing workflow and a storefront’; F1000Prime and its communities are about supporting the need to ‘discover, read, annotate, write and share scientific research’.) It should be said that post-publication peer review and layered journals, also in this mix, are not entirely new. In addition, there are plenty of innovations out there not cited by Hill, such as the Open Review Toolkit, the browser-based production system Editoria (see also van Rijn on this), the user-friendly typesetting software Booktype (used at the University of Sussex for experimental ‘booksprints’) and the research data repository and communities platform Zenodo. Others, such as Belshaw’s open beta books, also mentioned by Hill, look a step or two further away from making a big mark. A fuller if not exhaustive list of options for journals (not books) – if that were even possible – is offered in Michael’s blog post for The Scholarly Kitchen (SK).
It is also worth noting in this context that Priem, co-author of the Priem and Hemminger article cited earlier, is also the co-founder of ImpactStory, the altmetrics non-profit that created the Unpaywall browser extension.

Self-publishing

One name for the use of such services – one which few seem keen to say out loud – is self-publishing, for which further services of different kinds, specifically for monographs, are beginning to emerge in rapid order. Examples are Glasstree and the more experimental Manifold (the latter for publishers but also individual scholars). And in any case, there are signs that (minus some of Priem and Hemminger’s free market fundamentalism) something not entirely dissimilar is beginning to surface in the thinking of some authors and author/publishers. One advocate for DIY, specifically in the context of decoupling monographs and opening out access (or ‘unbundling’ – his term), is Green – see also his LSE Impact Blog. He notes, ‘For books, despite initiatives developed by organisations including Open Access Publishing in European Networks (OAPEN), Knowledge Unlatched, and Open Book Publishers, progress has been fairly glacial. At the time of writing, there are just over 8,000 titles listed in the Directory of Open Access Books which – considering that Springer alone offers nearly 280,000 titles from its online bookshop – suggests that the proportion of books published open access has yet to reach 2%.’

Green suggests a kind of Netflix or Amazon Prime model for individual scholars to subscribe to – a promising notion, although one whose keys would probably best not be left entirely in the hands of corporate digital monoliths. Another DIY enthusiast is media studies scholar David Gauntlett, whose argument for a ‘publish-then-filter’ model and for academics to ‘orchestrate their own presence’ and ‘take responsibility and not leave it to others’ comes from a school of thought that does not look to an academic’s host organization for assistance, even if he advises working with other individuals: ‘I’m saying you need to push and direct every aspect of an academic life, rather than expecting it all to be done for you by Professional Services … Ask people for help, and also, of course, help them.’

For a few individual scholars, the notion of contracting services will look attractive, and there is clearly potential for some academics to move into various types of book creation directly. However, that would undoubtedly seem like another burden to many, given the small percentage of academic workload that is freely available for book writing and researching – less than 10% according to one study. Many active researchers – perhaps the majority of those who also teach in the humanities and social sciences – may be less than thrilled at yet another skillset to master. Several publishers in these fields have also reported indifferent responses so far to open peer review or to the idea of complex community-operated publishing workflows. Writers are also deterred by the real and exaggerated problems surrounding predatory publishing practices for the unwary. These are just some of the reasons to expect a faltering take-up at best among individuals, as informed awareness of open access has yet to become widespread across all academic disciplines, as Johnson’s ethnographic work has indicated.

So, to summarize, Priem and Hemminger’s vision is first and foremost of individuals contracting modular publishing services from a host of new providers. Barring wholescale funding on a scale that is not imaginable – even Eve et al.’s conservative forecast foresees only a 75% conversion rate as a starting point – it seems likely that some humanities monograph publishing will head down this route. Gerven Oei makes a related point with a clarion call for the imaginative pursuit of humanities knowledge to which we may be too blind even to know we should be more attentive. His exemplar is his own field of Nubian studies: ‘If the minor humanities, the study of that which was and is spoken and lived on the edge of extinction, are to survive, open-access publishing, a publishing that is radically open and welcoming, may be the only way. And as such, it may be the only way to continue to revive and rejuvenate the humanities …’

In a similar vein, a long-serving humanities publisher such as myself can only wish the best for the many recent initiatives, like the Humanities Commons. Notwithstanding the traces of the beginnings of a funding path for monographs laid out by Eve et al., the labour-saving devices of technology are perhaps inevitably going to be considered the disruptive key to the problem of the humanities monograph’s very long tail. This particularly relates to scholars in the humanities who cannot be deterred from spreading the word about their research, even if they may receive little external help beyond that of their own networks and subject-specific communities.

Corporate capture

But is it reasonable to expect start-ups and new services to serve a significant portion of academic authors alongside established academic presses? Or for authors to be manoeuvred into the role of value-seeking consumers, as opposed to the knowledge producers they primarily see themselves as? The history of self-publishing in the trade sector can read uncannily like one of Amazon dominance (print and especially digital), echoing Google’s capture of the search market. Amazon’s control over the e-book market looks very much like the only architecture left standing now that the noise around a small minority of self-published authors has died down. So it could transpire that, after a period of numerous start-ups and innovations, market concentration might arrive in the form of a few entrenched and powerful players once the dust has settled.

Figures vary, but my own quite recent experience as a trade publisher was of seeing over 90% of e-book sales come from Kindle, and that dominant share may be even higher for UK publishers. Kindle’s competitors have tried and failed to compete against Amazon, and e-book momentum and competition have stalled. Many successful self-publishers in the trade sector recommend distributing exclusively via Amazon and not even considering other channels of e-distribution. In this lies a warning of corporate capture, as one industry commentator notes (below). What if the information referred to here were that of scholarly communications?

‘The information asymmetry between Amazon and the rest of the book industry – publishers, brick-and-mortar stores, industry analysts, aspiring writers – means that only the Seattle company has deeply detailed information, down to the page, on what people want to read.

‘And Amazon can keep doing what it does best, without any transparency to the public, readers, or the rest of the industry. Using its highly attuned proprietary data, it builds a bigger, more pervasive product with every turn of the page: the machine that knows readers.’

Something is lost once the competition between early innovators dies down, and it is not just the competition itself: when a few key players dominate, other vital elements of a diverse publishing ecosystem also tend to get sidelined. If not Amazon or Google, then other major players in the terrain, such as Elsevier, Taylor & Francis Informa and Wiley-Blackwell, could loom large here. Microsoft, perhaps? All of these seem to have an eye to purchasing digital innovators in different segments of any number of decoupling domains or, as another SK ‘chef’ has observed, ‘By taking a more active role in discovery and access for their publishing competitors, Elsevier and Holtzbrinck (through Digital Science) are positioning themselves in a highly influential position at key chokepoints’. [My emphasis]

One widely feared future for mandated monographs could be termed a re-run (with a few variations) of journals’ move to open access. Researcher Stuart Lawson, who has spent a lot of time examining article processing charge (APC) data and researching UK OA policy, puts this bluntly, calling the UK’s ‘neoliberalist’ OA policy the root cause of ‘the capitulation of the Finch report to an APC-funded version of gold open access, specifically designed to allow incumbent publishers to maintain existing profit levels’. More recently, Helen Snaith of Research England, when firmly ruling out direct funding for OA monographs, suggested that it might even cause higher prices, saying, ‘There may be a correlation between direct funding for OA and increased charges’.

Elsewhere, the experience of journals with APCs continues to receive a mixed reception, with prices continuing to outpace inflation and leading to impatience on the part of funding partners and libraries worldwide. There are multiple instances of libraries reconsidering the virtues of the ‘big deal’ for keeping prices down. Criticism of the OA dispensation in journals also comes from other quarters. What if the corporate publishers were to succeed in controlling the monographs OA market too? A shift to a gold OA model might see a repeat scenario, Suber fears, whereby ‘gold OA increases rather than reduces the cost of scholarly communication, and so confounds BOAI’s [Budapest Open Access Initiative] expectation that open access will be more cost-effective’. And, unlike journals, the fear is that books will simply not be funded to the level required (if at all – see above) through existing mechanisms and so will not be published. This too would widen, not narrow, ‘the North/South knowledge divide’.

If collectivist OA advocates are unhappy with the way things have panned out with journals, then so, apparently, are voices sympathetic to big corporate publishers. Threats – and sometimes even action – on the part of universities, or even the entire academic library sectors of countries, to cancel big deals have ruffled the feathers of Joseph Esposito, chef at the publishing blog SK. He has accused (perhaps a little tongue-in-cheek) the library sector of mafia-style tactics of ‘reaching for the muscle’ and forming alliances with ‘unsavory characters’, notably the pirate portal Sci-Hub, to enforce terms no publishers ‘could ever countenance’. It seems too easy to counter with the point that extortion rackets tend to demand more money each year for protection of their profits, not lower prices for the public good or wider access. As for the economies of scale that the big publishers offer, lauded in another of his posts, ‘Why Elsevier is a Library’s Best Friend’, the immediate question would have to be, ‘What economies of scale, and for whom?’ Kent Anderson, another SK chef, said in a similar vein that institutions that cancel big deals are making a ‘selfish’ and short-sighted decision (reported and attributed to Anderson in an Inside Higher Ed report), a concern he also wrote about in a recent article for The Scholarly Kitchen. There he compared terminating a big deal to cancelling a newspaper subscription: ‘Journalists lose their jobs, local media collapses and soon no one knows what’s happening inside government.’ The first comment on the original SK article was, ‘Yeah, except … newspapers produce their own content. They don’t sell their subscribers’ articles back to them’.
The combative tone of industry insiders is eyebrow-raising, especially as journals seem relatively limited in their exposure to serious legal competition – the sort that actually drives prices down, which is incidentally how free markets are supposed to work – and remarkably hostile to their customer base. Did I mention library budgets?

Collaborators, consumers and collectives

In the midst of this hoo-ha, the current expanding landscape of new university presses has been mapped most recently by Adema and Stone, with a notable addition in LSE Press, launching on 15 May 2018, now followed by Dublin City University Press, Ireland’s first OA university press. A more optimistic scenario is evident here, with groups of scholars, libraries and the odd publishing professional working together to support collaborative open access. Already there are many more options for universities, groups of academics, departments or individuals to consider for publishing, ranging from the Ubiquity Partner Network, university press hosting available via Ingenta, self-publishing (see earlier), Open Book Publishers (Publishing Services and Partnerships) and OpenEdition Books (look out too for the future work of HIRMEOS) to the option of working under the umbrella of another university press, as Lund University Press has chosen to do with Manchester University Press. Open Humanities Press is a long-standing academic collective with a highly distinguished editorial board. There is also the born-digital publishing platform Fulcrum, based at Michigan, currently in beta but likely to be a port of call for publishing projects that go beyond the ‘vanilla’ monograph. A key decision for an institution is whether to build a brand – what could be called the full university press option – or simply to undertake some scholarly communications activity supported by a library.

Such initiatives could be one-offs or ongoing, as with a journal or book series. Local pressure plays a key part in whether the final aim is monographs or journals publishing, or extends into the area of open educational resources. For the latter, the online repository MERLOT offers some inspiration, as does the Open Textbook Library. The journals territory has become more competitive recently, with Veruscript offering an alternative in the UK to Ubiquity Press, and Janeway, a new platform developed by Martin Eve and Andy Byers, from which the University of Huddersfield Press will relaunch its journals publishing. Also worthy of consideration are Project Muse and the Public Knowledge Project with its open-source software OJS, which supports thousands of journals. Lastly, some pin their hopes on a future European Open Access Platform and other large-scale public or funder infrastructure platforms, but the prospects here are far from clear, as Ross-Hellauer et al. suggest. Crowdfunding, local or national benefactors, or the support of subject sectors underpin these initiatives, so that they collectively embody different visions – ones potentially not so isolating for authors or so dependent on remote commercial entities. The pace is picking up for books too.

What most of these ventures and entities suggest, though, is not the radically individualized, algorithm-driven landscape of Priem and Hemminger but something nearer to collective initiative – and that is even before mentioning enterprises such as the Open Library of Humanities, the Lever Press and Knowledge Unlatched, and the subscription schemes of Open Book Publishers and Ubiquity Press, all of which still wrestle with the ‘free rider’ issue but are active and professionally managed, and are demonstrating that academic publishing functions can be undertaken with the public good in mind.

Much concern has focused on the issue of who will pay for the transition to OA monographs, thoughtfully discussed by Eve et al. and briskly addressed by Snaith. There is one simple answer to this, which in essence is no different from the answer to the question of who pays for the existing system: we all do. Yet the issue of inequality should be addressed via collaborative projects that take the matter of economic means into account at all levels. This would be unlikely to happen in a market-individualist free-for-all, where there would be a rush to service the top end. Risks are run whichever route you look at, but the most important one, to my mind, is captured by this heading of ‘inequality’. Here the perfect may be the enemy of the good if limited progress towards open access for all is disparaged. If there is no progress and no impetus to harness the economies of technology, then ‘pay to read’ remains an earlier roadblock that shuts out many scholars, students and early career professionals before they even reach the ‘pay to say’ problems – with so much research dependent on and building on the work of others. This is not to suggest that potentially high book processing charges are not a problem. As Bianca Kramer and Jeroen Bosman point out, the tangible benefits linked to (often inexpensive) APCs – ‘quality and speed of peer review, manuscript formatting, or functionality and performance of the publishing platform’ – take second place and matter less in a pressured job market than artificially inflated values tied to ‘impact factors and journal brands’. It has happened with journals, and there will be those hoping it might prove the case for books. Yet production costs do not need to be that high if you discount the games and rituals of ‘ranking’; these costs could come down further.

Ultimately, how exclusive can we afford research to be? Consider the case of any number of humanities journals or academic-led publishers (e.g. the Radical Open Access Collective) that operate under time and financial pressure but still publish successfully and are slowly building the prestige that will be vital for survival.

Conclusion: look to libraries?

Contrary to the stances taken by Esposito and Anderson, I would argue that, whatever their partiality, libraries and librarians are still in the best place to assess where the scholarly communications infrastructure should go from here. They increasingly operate at the digital intersection of research, readers and budgets and are becoming more engaged with publishing, whether or not they undertake it themselves. Of the current journals publishing system, it is telling that, of the much-cited list (and its predecessors) of the 102 ‘things’ publishers of scholarly journals do, recorded by Anderson, only 11 are, he suggests, primarily for the benefit of libraries (compared with 74 for authors). Barely a week goes by without news of a new service or platform, but it is far from clear whether most scholars will relish – though some might – adding another skill set and additional online administration to their workloads in order to replace a system many consider functional, if not ideal. Libraries themselves almost certainly need to bring in more publishing expertise or acquire new skills here. Innovative university presses across the world are already retooling to operate in an environment which is competitive, but one in which they aim to serve the demands of readers and authors, and of students at their own universities, and to provide truly broad dissemination.

On top of library publishing initiatives of diverse kinds, I see the inevitability of some author self-publishing within the monographs field – it is happening, and its scope at the time of writing is probably still to be mapped accurately, even for journals, with more quietly published titles (good, bad, in-between) than can be readily audited. Scaling up from the individual author having to navigate a multitude of web options to the level of departments, universities, groups and societies undertaking considered collective publishing initiatives seems preferable. Leaving academics to navigate solus a myriad of start-ups and modular tech options (or, worse still in the long run, having to interface with the detailed terms, conditions and administrative processes of one digital giant who calls all the shots) may not deliver the diverse publishing ecology many are calling for. Traditional publishers command vast resources and do indeed have multiple efficiencies honed over decades, but such capabilities need to be redirected: away from the construction of thick paywalls and heavy lobbying and public relations initiatives, and towards lowering costs to libraries and readers, marketing their wares actively (not hoarding them through the creation of artificial scarcity) and spreading the word about the value of their publications and services, presented to the world at affordable prices. They have the expertise, and there is room for many of us to work in a diverse publishing environment, even if jobs will inevitably be redistributed. There is a need to challenge the narrative that no one is really interested in specialized research anyway (or rather, that no one with sufficiently deep pockets is), and to move towards a more equitable dispensation in which the benefits of low-cost digital distribution can be harnessed and spread for impact and equality of scholarly opportunity.
My favourite instance here is the UCL Press publication Participatory Planning for Climate Compatible Development in Maputo, Mozambique, which has received 8,575 views and downloads since its publication in November 2015 – an incomparably higher number than the average monograph sale of 200 reported by Knowledge Exchange, and a true testament to the power of open access to reach audiences beyond a tiny club of high-net-worth libraries.

Back at the Redux conference, Sarah Kember of Goldsmiths Press sounded more than a note of caution about ‘cyberlibertarianism’ and ‘technological disruption and competition’, preferring ‘a collaborative, institutional model of scholar-led publishing and shared infrastructure’. It was well received and seemed an appealing vision to diverse constituencies within the audience. Yet, as things stand, it seems that when it comes to OA monographs we will get a complex mix of all these elements: traditional stakeholders playing the prestige card with extra layers of gilt, self-publishing, collaborative projects and library publishing. The opportunities out there for lower costs, more freedom and greater diversity will be squaring up to the tangible dangers of APCs 2.0, oligopolistic high pricing and heavy levels of control. The fine traditions of university press publishing of many shades could emerge stronger and more relevant. And there will be much debate, and several ways, means and routes to publication about which to be ‘agnostic’.