The 2002 UNESCO declaration that officially defined the open educational resources (OERs) movement, which seeks to promote free online educational materials licensed for reuse, was notable for its inclusion of social justice issues, calling for material to be redistributed to benefit those with limited access to resources. However, the movement has not always kept its focus on social justice, and while OERs in and of themselves do address some issues, such as financial barriers to learning materials, they do not automatically address all of them. In some cases, they even create or repeat barriers. This is especially true for individuals with disabilities when open textbooks are not designed with accessibility in mind.

Although not all people with disabilities face barriers when using a typical OER, accessibility is important for those who rely on a screen reader or other assistive technology, and on inclusive design, to interact with websites and other digital files and programs. The World Wide Web Consortium (W3C) says that ‘Web accessibility means that websites, tools and technologies are designed and developed so that people with disabilities can use them.’ This includes people with auditory, cognitive, neurological, physical and other disabilities. Web accessibility can involve adding alternative text for images, making a website navigable using only a keyboard and ordering content so that it makes sense and users can easily jump to the content they need.

Article 9 of the United Nations (UN) Convention on the Rights of Persons with Disabilities, adopted in 2006, addresses accessibility as it relates to, among other things, ‘the design, development, production and distribution of accessible information and communications technologies and systems’. It calls for the elimination of any obstacles that may hinder persons with disabilities from participating fully in various aspects of daily life and from accessing needed information. Many UN member states have enacted their own laws or policies concerning accessibility. These include the United Kingdom’s Equality Act 2010 and the Republic of Korea’s Act on Welfare of Persons with Disabilities. Likewise, the European Union’s Web Accessibility Directive, adopted in 2016, and the European Accessibility Act, adopted in 2019, both aim to ensure member countries follow a set of web accessibility standards. While the former is limited to public sector websites and applications, the latter applies to the private sector as well and to a variety of digital products like ATMs and ticketing machines.

In the United States, two federal laws – the Americans with Disabilities Act and Section 508 of the Rehabilitation Act of 1973 – govern web accessibility for most websites. Dozens of lawsuits have been filed against American universities over their inaccessible websites, and more are likely with the move to online education in the face of the Covid-19 pandemic. Courts have been clear that online educational materials must be accessible. This would appear to include OERs.

So far, most of the literature on accessibility and OERs has focused on efforts by various groups to develop platforms, systems and processes that better enable instructors to create and find accessible material. Fewer efforts have focused on the current state of OER material, however. This study seeks to address that gap by providing an overview of a random sample of open textbooks that were evaluated based on criteria selected from the W3C’s Web Content Accessibility Guidelines (WCAG).

Literature review

The WCAG guidelines have become the gold standard for creating and evaluating web accessibility. The most current version is WCAG 2.1, which was released in 2018, although WCAG 2.0, released in 2008, is still common. The guidelines include four broad principles:

  • perceivable – all users should be able to perceive all web content and components. For instance, web page editors should include alternative text to describe important contextual information in images for visually impaired users
  • operable – all users should be able to navigate a website
  • understandable – all users should be able to understand the content and operation of a website
  • robust – a variety of assistive technologies can interpret a website’s content.

The guidelines include more specific rules under each principle, which are rated from Level A to AAA, with Level A covering the most basic and essential requirements. WCAG includes guidance for PDF documents, and the accessibility standards for EPUB files are built on WCAG. Although the WCAG guidelines are widely used, they have drawn criticism for being opaque, outdated, insufficiently extensive and overly complicated.

Not all digital formats are equal when it comes to accessibility. PDFs have long been criticized for how difficult they are to make accessible, with many recommending HTML-based websites instead. Meanwhile, the DAISY Consortium, a not-for-profit organization that works to support accessible digital publishing, helped create EPUB3 with accessibility in mind.

A number of automated tools exist to help anyone evaluate digital material for accessibility. For instance, browser extension tools such as WAVE and Siteimprove will run automated website evaluations, and Adobe Acrobat Professional can run automated checks on PDF documents. However, these automated tools cannot catch everything and will prompt users for specific areas that need a manual check. For instance, one study found that while the tools excel at determining whether any alternative text is available for images, they are incapable of determining if the text actually describes the image. Another study found that the tools are sometimes overly eager, flagging the same easily fixed error multiple times because it appears on every page of a site.
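For example (an illustrative fragment, with the file name and description invented rather than drawn from any of the cited studies), an automated checker would accept both of the following images because each has an alt attribute, even though only the second actually describes the image:

    <!-- Alternative text is technically present but conveys nothing useful. -->
    <img src="figure1.png" alt="figure1.png">

    <!-- Alternative text that describes the image's content for screen reader users. -->
    <img src="figure1.png" alt="Bar chart comparing average textbook costs from 2010 to 2020">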

A number of studies have looked at the accessibility of e-books in general, while others have focused on the accessibility of OERs. A scoping review by Moreno et al. identified 56 articles and research projects that had looked at the accessibility of OERs and massive open online courses (MOOCs). Common topics included developing guidelines and best practices and developing accessible OERs, while just six looked at verifying the accessibility of OERs and MOOCs. A more recent scoping review by Zhang et al. that focused solely on OERs and accessibility found 31 articles, with the most common topics identified as system design and framework/architecture. Of these, 16 looked at evaluating OERs for accessibility in some way, although this included the sites and platforms that house OERs. The authors noted that their review found room for improvement in the accessibility of OERs and encouraged those involved to use the WCAG guidelines.

Research articles that have looked at a specific area of OERs and accessibility have often focused on efforts to support others in this work. For example, two projects attempted to create a tool to help people author accessible OERs, while another attempted to create a platform that government workers could easily use. Others have focused on creating a platform to help users discover accessible OERs.

Some studies have looked specifically at evaluating OERs for accessibility. For this, Navarrete and Luján-Mora suggested a mix of automated tools as well as manual user testing, while Ávila Garzón created a rubric based on the WCAG guidelines. However, another study called on OER participants to go beyond WCAG.

Navarrete and Luján-Mora evaluated the home page of four open courseware websites using four automated tools and based their evaluation on the International Organization for Standardization’s Guidance on Usability standard. The study found none of the websites did well in accessibility, with the authors noting, ‘We found that web accessibility is still a pending issue in all the websites, with distinct levels of severity’. In another study looking at 18 South American repositories that house OERs, Rosa and Mortz found that all of them had some errors at WCAG Level A, ranging from four to 12 errors, meaning none of them met even basic accessibility guidelines. Neither of these studies focused on the content of the actual OERs, however.

The only study that appears to have done so focused on just 20 OERs. The study evaluated OERs in eight categories, although it was unclear how they determined these categories. The categories with the most common issues included tables (10) and images (eight), whereas 19 books passed in the multimedia, font and color contrast categories. The authors noted that the OERs were clearly not made with disabled students in mind, saying, ‘When OER [sic] are created with faulty assumptions of students’ mental and physical abilities, OER [sic] become part of a larger social problem that systematically excludes students with disabilities from equal education’.

To fill this gap in the literature, we evaluated a larger number of open textbooks using a rubric adapted from WCAG 2.1. The study aimed to answer how accessible open textbooks are, including in specific WCAG categories and among specific file formats.

Methodology

This study sought to evaluate a random sample of more than 300 freely available textbooks using a rubric we based on WCAG 2.1 (the current version at the time of writing). Our rubric focused on the portions of WCAG 2.1 that we identified as applicable to the types of OERs we sought for our sample set and that were most likely to be under a textbook author’s control as opposed to a web developer’s or publisher’s. For example, we did not incorporate guidelines pertaining to audio and video content into the rubric, as most open textbooks do not appear to include them. We evaluated textbooks for conformance at the A and AA levels. However, we did include one Level AAA criterion, ordered headings (i.e. does a Heading 1 come before a Heading 2?), because of the relative ease of implementing it using common word processing software and its importance to effective screen reader use.

The rubric consists of 16 categories, each corresponding to a particular WCAG guideline, and was maintained in an Excel spreadsheet (an illustrative markup sketch follows the list):

  • alternative text – does any non-text content have a text alternative, or is it marked to indicate it is a decorative item, which would not require any alternative text? (Level A)
  • coded elements – are all elements appropriately marked with HTML codes/tags (e.g. paragraph, links, strong)? That is, the coding standards set by HTML version 5 (the most current at the time of writing) are followed. (Level A)
  • outdated coding – does not use <i> for italics, <b> for bold, or other incorrect and outdated codes. With the adoption of HTML 5 and its standards, certain tags are now considered outdated. For example, the <strong> tag should be used to bring attention to words or phrases instead of the older <b> tag. (HTML only, Level A)
  • visual cues – information is not conveyed only through text formatting or other visual cues. (Level A)
  • tagged PDF – for PDF file types, is the file tagged at all? That is, are all of the text’s elements labeled as what they are in relation to one another (e.g. heading, paragraph, image, table)? (Level A)
  • tables – are any tables appropriately tagged/coded? (Level A)
  • lists – are any lists appropriately tagged/coded? (Level A)
  • ordered headings – are headings properly ordered, such that a Heading 2 falls under a Heading 1, a Heading 3 falls under a Heading 2, and so on? (Level AAA)
  • tagged links – links are appropriately tagged so that they are recognized when tabbed through. (PDF only, Level A)
  • content order – is the content ordered to make sense for someone tabbing through the document? (Level A)
  • color contrast – is the color contrast appropriate? Individuals must be able to differentiate content on the page, so there needs to be enough contrast between the foreground (i.e. text) and background colors, with a ratio of at least 4.5:1. (Level AA)
  • images of text – does not rely on images of text except as needed. (Level AA)
  • title – does it have a title and is the title appropriate for the content? That is, the title is descriptive and has meaning. ‘An Introduction to Calculus,’ for example, would be appropriate, while ‘calc_intro_file_final.pdf’ would not be. (Level A)
  • descriptive links – uses descriptive link text. (Level A)
  • descriptive headings – uses descriptive headings and labels. (Level A)
  • language – is a language set for the document/website and are any changes of language also indicated? (Level A)
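To make these categories more concrete, the following short HTML fragment sketches markup that would pass several of them – language, coded elements, outdated coding, ordered headings, alternative text, descriptive links, tables and title. The content and URL are invented for illustration and the element names follow standard HTML5; it is not an excerpt from any book in the sample:

    <html lang="en">                                      <!-- language set for the document -->
      <head><title>An Introduction to Calculus</title></head>   <!-- descriptive, meaningful title -->
      <body>
        <h1>Chapter 1: Limits</h1>                        <!-- top-level heading -->
        <h2>1.1 The idea of a limit</h2>                  <!-- Heading 2 nested under Heading 1 -->
        <p>A <em>limit</em> describes the value a function approaches.</p>   <!-- <em> rather than the outdated <i> -->
        <img src="limit-graph.png"
             alt="Graph of f(x) approaching 2 as x approaches 0">   <!-- descriptive alternative text -->
        <p>See the <a href="https://example.org/derivatives">section on derivatives</a> for more.</p>   <!-- descriptive link text rather than a bare URL -->
        <table>
          <tr><th scope="col">x</th><th scope="col">f(x)</th></tr>   <!-- header cells marked with <th> -->
          <tr><td>0.1</td><td>1.9</td></tr>
        </table>
      </body>
    </html>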

Books either met each of these guidelines, receiving a ‘pass’, or did not and received a ‘fail’ in that category. We did not track how many times a book failed in a category or the severity of the fail. As noted, some of these apply only to certain formats (i.e. HTML or PDF). In such scenarios, we gave books a ‘not applicable’ designation on the rubric for that category. We also used the not applicable designation in situations where we could not assess a book’s compliance with the guidelines, such as when the author(s) used no links, lists or tables in the text.

The study’s team consisted of two librarians and a student research assistant. We tested the rubric by having each of us independently evaluate a small set of books from the sample and then compared our ratings. We repeated this process three times and reached 86% agreement. Moreover, we met with our student research assistant each week to resolve any issues she was unsure about and to ensure consistency throughout the course of the project. We also turned to our university’s accessibility co-ordinator/trainer whenever we had questions about how we should interpret WCAG criteria.

All of the study’s open textbooks were found using Openly Available Sources Integrated Search (OASIS), an OER search tool developed by the State University of New York at Geneseo’s Milne Library. As a result, our sample, though representing a range of disciplines, includes a significant number of texts from creators based in North America. We also cannot be certain what proportion of the total number of open textbooks our sample represents. First, we used a random number generator to select a number between one and 10, which returned seven. Starting with the seventh OASIS search result, we then chose every fifth book in the list after that. In cases where the OASIS record did not point to a specific title but to a mini collection of textbooks, such as items available through the LibreTexts project, we started by counting the number of books in the package (the first in the list as number one, the second as number two, etc.). We then used a random number generator to select a number within that range to determine which of the books to include in our sample. Our initial sample set consisted of 393 books.

As we worked through the list of textbooks, we removed any exact duplicates from the sample set as well as books that turned out to be no longer available (i.e. they were taken down between the collection and evaluation portions of the project). We also excluded books that, upon closer inspection, fell outside the project’s scope. Specifically, we excluded OERs that were more akin to a self-paced online course or module or those that were merely a list of suggested readings, lecture recordings or assignments. Ultimately, our final sample set consisted of 355 books in four formats: HTML files/websites, PDFs, Microsoft Word documents and EPUBs. If a given book was available in multiple formats, we chose to prioritize the EPUB or HTML file – thought to be the most accessible format types – and only evaluated the PDF/Word versions of items that were only available in those formats.

Due to time constraints, we evaluated the first 20 pages or, in cases where pagination was non-existent, the first 10,000 words of each textbook. To do so, we relied on several different tools: Siteimprove for books in HTML/website form; Ace by DAISY and Calibre for EPUBs; Adobe Acrobat Pro for PDFs; Microsoft Word’s built-in accessibility checker for Word documents; and the Paciello Group’s Colour Contrast Analyser across all file types. For HTML and EPUB books, we also regularly checked the source code to verify the results from the tools. In some situations, we relied on free screen readers – NVDA and Apple’s VoiceOver – to determine whether they could handle a section of text we were unsure about (e.g. mathematical equations). Furthermore, we limited our evaluation to the books’ main content and did not look at associated or supplementary materials that the authors linked to rather than embedding directly in the files themselves. When evaluating HTML books, we did not penalize books for inaccessible design elements under the platform’s control, such as the platform’s logo or main menu.

During the accessibility evaluation process, we had to settle on a consistent way of interpreting the extent of a guideline’s reach. For example, WCAG 2.1 calls for descriptive link text, but some citation styles require the use of full URLs. We therefore gave books a pass in that category if they only ran afoul of the guideline in clearly marked reference lists and bibliographies. We were likewise lenient in assessing the descriptiveness of link text and headings. Empty headers – elements tagged as headings but without any text – were given a pass, unless we could tell they were clearly being used to cheat automated accessibility checks that look at heading nesting order. Additionally, when evaluating PDFs and EPUBs, the tools automatically checked the entire document, even though we were focused on only the first 20 pages. Nevertheless, we decided to pass, and keep track of, books whose images within the first 20 pages had alternative text, even if we could see that images elsewhere did not. For non-HTML file types, like PDFs, any tagging earned a pass in that category, but the first 20 pages had to be tagged appropriately to meet the ‘correctly uses headings coding/tags’ requirement.

Results

Our final sample set consisted of 355 textbooks in total, as shown in Figure 1. Of these, 99 (28%) were HTML/websites, 124 (35%) were PDFs, 23 (6%) were Word documents and 109 (31%) were EPUBs. As noted above, not all of the 16 rubric criteria applied to every book format: 15 criteria applied to PDFs, while 14 criteria applied to the other three formats. Across all formats, the average number of fails per book was 5.93 and the median was six. The average number of passes was 7.52 and the median was eight. By format type, HTML, Word documents and EPUBs all had a median of five fails, whereas PDFs had a median of eight fails (see Table 1). The average number of fails per textbook was 7.42 (PDF), 5.30 (HTML), 5.26 (Word) and 4.95 (EPUB).

Figure 1 

The sample set of 355 textbooks broken down by format type

Table 1

Passes and fails by format type

Format | Average Number of Fails | Median Number of Fails | Mode | Number of Applicable Guidelines

PDF | 7.42 | 8 | 9 | 15
HTML | 5.30 | 5 | 5 | 14
Word | 5.26 | 5 | 3 | 14
EPUB | 4.95 | 5 | 4 | 14

In terms of how often books failed, the maximum number of fails for any textbook in HTML format was 11 (out of a possible 14), and the minimum number of fails was one. The minimum number of fails for any textbook in PDF format was also one (out of a possible 15), but the maximum number of fails was 13. The minimum number of fails for any textbook in Word format was three (out of a possible 14), and the maximum number of fails was eight. For books in the EPUB format, nine was the maximum number of fails any had (out of a possible 14). However, two EPUBs did not receive any fails. These were the only textbooks in the sample that were fully accessible according to the rubric. We also counted the number of books by file format that had three or fewer fails. For HTML files, this was 17 of 99 books (17.17%), and for PDFs, it was only eight of 124 (6.45%). Twenty-seven out of 109 EPUBs (24.77%) and five out of 23 Word documents (21.74%) had three or fewer fails.

We also examined how the 355 textbooks performed in each of the categories on our rubric. As noted above, not all 16 of the categories applied to every format type. Additionally, not every category was applicable to every book. Some books, for instance, did not have any tables and received an ‘NA’ in that category. Table 2 provides the percentage of books, in any format, that failed each category on our rubric. A third column showcases this same data but with any non-applicable cases removed from the sample prior to tabulation.

Table 2

Percentage of fails by category (all formats)

Category | Percentage of Fails (NAs included) | Percentage of Fails (NAs excluded)

Coded elements | 75.49% | 75.49%
Ordered headings | 74.37% | 74.37%
Alternative text | 66.76% | 80.34%
Images of text | 47.32% | 47.32%
Lists | 45.92% | 48.37%
Tables | 40.56% | 78.26%
Color contrast | 40.56% | 40.56%
Title | 40.00% | 40.00%
Descriptive links | 38.03% | 47.04%
Language | 28.17% | 28.17%
Content order | 27.32% | 27.32%
Tagged PDF | 23.38% | 56.46%
Outdated coding | 22.54% | 38.65%
Visual cues | 16.90% | 16.90%
Tagged links | 5.07% | 14.63%
Descriptive headings | 0.85% | 0.85%

Overall, the textbooks performed poorly in categories such as alternative text, with fewer than 20% of the books passing, and coded elements, with only about a quarter of books passing. Many titles used outdated tags (e.g. <i> instead of <em>), which can be problematic for screen readers. The books also overwhelmingly had inaccessible tables, even after removing from the sample those that did not include any tables at all (only 21.74% passed this category). Only about a quarter of the textbooks complied with the single AAA category in the rubric, ordered headings.

The books did a bit better when it came to the two rubric categories based on WCAG criteria at the AA level – color contrast and images of text. More than half (around 60%) complied with the former, and just about half complied with the latter. Other categories that saw middling performance included descriptive links, lists and tagged PDFs. In contrast, the books passed more often than not when it came to categories like the use of descriptive headings and labels (99.15%), content order (72.68%) and language (71.83%), particularly when any non-applicable titles were removed.

We also analyzed how books published in the four format types performed in each of the rubric categories, with any non-applicable titles removed. Table 3 shows the failure rates for the four format types. All of the format types performed poorly in many of the same categories as the entire sample set. For instance, only 28.57% of HTML files, 5.26% of PDFs, 9.52% of Word documents and 28.40% of EPUBs received passes in alternative text, with PDFs and Word documents failing at notably higher rates. Most formats also tended to have high passing rates in categories like title, content order and descriptive headings.

Table 3

Percentage of fails by category and by format type

Category | Percentage of HTML Fails | Percentage of PDF Fails | Percentage of Word Fails | Percentage of EPUB Fails

Alternative text | 71.43% | 94.74% | 90.48% | 71.60%
Coded elements | 76.77% | 81.45% | 78.26% | 66.97%
Color contrast | 61.62% | 34.68% | 17.39% | 33.03%
Content order | 1.01% | 24.19% | 8.70% | 58.72%
Descriptive headings | 2.02% | 0.81% | 0.00% | 0.00%
Descriptive links | 33.33% | 51.81% | 46.67% | 56.99%
Images of text | 59.60% | 38.71% | 52.17% | 44.95%
Language | 14.14% | 67.74% | 0.00% | 1.83%
Lists | 29.47% | 78.95% | 73.91% | 25.96%
Ordered headings | 69.70% | 26.47% | 69.57% | 73.39%
Outdated coding | 38.78% | 0.00% | 0.00% | 38.53%
Tables | 67.24% | 89.39% | 63.64% | 79.59%
Tagged links | 0.00% | 14.63% | 0.00% | 0.00%
Tagged PDF | 0.00% | 66.94% | 0.00% | 0.00%
Title | 2.02% | 92.74% | 65.22% | 9.17%
Visual cues | 33.33% | 12.90% | 17.39% | 6.42%

There were some exceptions, however. Only 7.26% of PDFs and 34.78% of Word documents had appropriate titles. PDFs also performed poorly when it came to tables (10.61% of books passed) and lists (21.05% of books passed). PDFs received a high passing rate (85.37%) in the one rubric category that applied only to that format type, tagged links. Interestingly, both HTML/websites and EPUBs had similar passing rates – 61.22% and 61.47%, respectively – in the rubric category that applied only to those two formats, outdated codes. All of the 23 Word documents passed the final category in the rubric, language.

Discussion

Lack of attention to basic accessibility practices

Our results suggest that OER content creators are not engaging in basic accessibility practices. They are failing to do things such as add alternative text to images, properly code/tag the elements of their documents, format tables and maintain a logical heading order – all of which are vital to successfully navigating a document using assistive technologies like screen readers. In contrast, categories the books did best in – properly ordering content, providing a title, using descriptive headings, setting a language – are practices not solely within the purview of accessibility. They make for easier reading and comprehension generally, or they reflect default convention or program settings. For example, all the Word documents included in the study had a language set for the document, as noted above. But this is likely to be a result of the default settings within the authors’ local versions of Microsoft Office and not a conscious decision that they made. All of this seems to indicate that many OER authors are not aware of accessibility and/or are not taking the steps necessary to make their creations usable for those with disabilities. Accessibility training and support for OER authors are, therefore, needed to ensure these resources are truly open to all. This support could come from the authors’ institutions, scholarly/professional organizations or OER platforms.

Moreover, this issue is present even when looking at each of the four format types individually, as all performed poorly. They all failed in many of the same rubric categories, although PDFs had a higher average and median number of fails. PDFs are harder to edit and, therefore, can also be difficult and time-consuming to remediate. EPUB, a format designed for increased accessibility, was the only format type to include books that passed all the rubric categories, but its overall average was close to those of HTML/websites and Word documents. While the programs used to create documents in these four format types have features that make accessibility easier to achieve, such as the heading, list and table options within Microsoft Word, authors must know about and use them. In any case, OER authors might work with publishers and developers to release their works as HTML files/websites and/or EPUBs, even if the documents originated in Microsoft Word, as these formats are easier to adapt and remediate.

WCAG levels

Ultimately, most of the criteria we used in the rubric were Level A, meaning they are basic components that any website or digital document needs to meet to be considered minimally accessible. It was therefore concerning to see that the open textbooks struggled the most with Level A criteria. While some might see Levels AA and AAA as optional, failing any Level A criterion means readers with certain disabilities will not be able to interact with an open textbook at the level of non-disabled readers. Until the OER community can rectify these issues, open textbooks will remain closed to some.

In contrast, it was heartening that the rubric’s two Level AA criteria – color contrast and images of text – both fell in the middle of the pass rates. These criteria, while not among the most basic web accessibility requirements, both serve important functions for readers with certain disabilities. However, both left room for improvement, and we hope OER authors will not ignore them.
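For reference, the 4.5:1 threshold for color contrast comes from the contrast ratio defined in WCAG,

    contrast ratio = (L_1 + 0.05) / (L_2 + 0.05),

where L_1 and L_2 are the relative luminances of the lighter and darker colors respectively; the ratio ranges from 1:1 (no contrast) to 21:1 (black on white). Tools such as the Colour Contrast Analyser used in this study perform this calculation automatically, so authors do not need to compute it by hand.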

Reliance on automated tools

We relied on a number of automated tools to assist us in evaluating the textbooks for accessibility but found, just as Silva et al. did, that they were imperfect helpers. We chose Siteimprove because it is the tool through which all of our university’s public-facing websites must demonstrate compliance, yet we found early on that we often needed to check the source code ourselves to confirm potential issues. The same was true for EPUB files. The DAISY Consortium created Ace to check EPUB files automatically. However, the tool could be confusing and did not always make clear what the exact issue was, necessitating our use of the Calibre EPUB editor, which shows the source code next to what the reader sees.

Even checking the source code was not always enough. We occasionally ran into special cases, such as mathematical equations, where we could not be sure how a screen reader would handle the content. In those cases, we opted to use a screen reader, as that seemed the only way to know for sure. Although this might sound intimidating for potential OER adopters who want to evaluate a possible textbook for accessibility, we found that it was not overly difficult. More exposure to and training in screen reader technologies would be likely to mitigate these fears and help reduce the time needed. Perhaps even better, instructors and authors can include users who are disabled in the creation and evaluation of OERs.

Although the automated tools could only help to a point in evaluating textbooks for accessibility, such tools can be far more useful to authors in creating accessible documents. For instance, using Word’s preset text formatting, including the heading options, will ensure a fairly accessible document (although authors will still need to take some steps, such as adding alternative text for all images). And as long as a Word document is accessible, authors can export it to a PDF or EPUB file while retaining the tags or codes needed for accessibility.

Authorial control vs. developer control

As useful as the text formatting tools are to authors, their advantage only lasts if the code behind them is accessible. We realized part-way through this process that some of the criteria we included were not always under the author’s control. For instance, we noticed that books on one platform consistently failed the outdated coding criterion because they used <i> for italics instead of the now accepted <em> for emphasis. The authors probably all relied on the platform’s text formatting options, including for italics, which would mean the platform developers had not updated the code (or, potentially, the books were written before the code was updated). In another instance, books from another platform seemed always to fail ordered headings, partly because ‘Chapter [X]’, which always appeared at the very beginning of a chapter, was a smaller heading that should have fallen under a broader heading but did not. When we did a test run on this platform, the premade templates that authors can choose from appeared to force this choice on the creator.
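A hypothetical fragment consistent with this pattern (the heading levels and chapter title are invented for illustration, not taken from the platform in question) would look something like:

    <!-- The chapter label is a lower-level heading that appears before any
         top-level heading, so the heading hierarchy starts out of order. -->
    <h3>Chapter 1</h3>
    <h1>Limits and Continuity</h1>

    <!-- One ordered alternative folds the label into the top-level heading. -->
    <h1>Chapter 1: Limits and Continuity</h1>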

Indeed, our rubric relied on a relatively small set of criteria from WCAG partly because so many are beyond an author’s control and can only be changed by the developer of whatever authoring platform is being used. Authors can only do so much to make their textbooks and other OERs accessible – the platforms they use to author, house and disseminate them must also be accessible. While we expect authors to do better with what is under their purview, the OER community needs to press these platforms to ensure they are fully accessible. At the same time, the platforms could help by giving authors more control over some of these options – preset templates, while helpful, can be too constraining, and allowing authors to make changes could help solve some of these problems.

Recommendations

In sum, our recommendations are as follows:

  • academic institutions, scholarly or professional organizations and authoring platforms should work to raise OER authors’ awareness of accessibility as well as provide these authors with the training and support they need to make their creations accessible
  • the OER community should push authoring platforms to conform to accepted accessibility standards
  • these platforms should also work with authors to ensure their OERs are released in the formats easiest for others to remediate or adapt (i.e. as HTML files/websites or EPUBs).

Conclusion

Ensuring that OERs are accessible to those with disabilities is both a legal and moral imperative. Few studies, however, have gauged the accessibility of existing OERs. This study seeks to fill that gap by evaluating the accessibility of a random sample of 355 open textbooks using a rubric based on WCAG 2.1, primarily at the A and AA levels. Our findings suggest that OER creators are, in general, not adhering to common accessibility best practices. Only two books in the sample complied with all of the rubric categories that applied to them. Each of the four format types we encountered – HTML files/websites, PDFs, Word documents and EPUBs – failed to comply with the criteria in many of the same rubric categories, though PDFs had a slightly higher average and median number of fails. The criteria the books failed to uphold most often, such as adding alternative text to images, are key to accessibility. Author training and support, as well as the promotion of more accessibility-friendly tools, formats and platforms, are possible ways forward.

Our rubric can serve as a tool for educators looking for OERs to adopt. Though limited in scope, it could be further refined and used as an accessibility checklist for OER creators. Moreover, future studies could replicate or extend our evaluation to paint a fuller picture of existing OERs’ accessibility.

Data accessibility statement

Data generated during the course of this project are available within the Harvard Dataverse Repository at https://doi.org/10.7910/DVN/UU2TGW.