In March, WebAIM published their annual accessibility report. A number of well-informed folks have read the report and written articles with their key takeaways.
Manuel Matuzović picks up on one figure in his post, “50.1% empty links“. The number of websites containing links with no text (usually when linking an image that lacks alt text) has risen by 0.4%. He tests out various screen readers on an ‘empty link image’ and documents the result, which is universally garbage, albeit with some differences.
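As a minimal sketch of the kind of check automated tools run for this: a link is “empty” when nothing inside it contributes an accessible name – no text, no non-empty alt, no aria-label. The class name and markup below are invented for illustration; real tools like WAVE or axe do far more (full accessible name computation, ARIA references, and so on).

```python
# Simplified "empty link" detector: flags links whose only content is an
# image without alt text. Illustrative only - not how WebAIM's tooling works.
from html.parser import HTMLParser

class EmptyLinkChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_link = False
        self.link_has_name = False
        self.empty_links = 0

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a":
            self.in_link = True
            # aria-label would also give the link a name
            self.link_has_name = bool((attrs.get("aria-label") or "").strip())
        elif tag == "img" and self.in_link:
            # An image only contributes a name via non-empty alt text
            if (attrs.get("alt") or "").strip():
                self.link_has_name = True

    def handle_data(self, data):
        if self.in_link and data.strip():
            self.link_has_name = True

    def handle_endtag(self, tag):
        if tag == "a":
            if not self.link_has_name:
                self.empty_links += 1
            self.in_link = False

checker = EmptyLinkChecker()
checker.feed('<a href="/home"><img src="logo.png"></a>'
             '<a href="/about">About us</a>')
print(checker.empty_links)  # prints 1: the logo link has no accessible name
```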
Manuel concludes that you should “test your sites at least with an automatic testing tool like axe, Lighthouse, or Wave, and label linked graphics. I’ve described several ways in ‘Buttons and the Baader–Meinhof phenomenon’”.
In “We need accessibility action“, Eric Eggert goes deeper. He notes that WebAIM’s tests are all automated (and test home pages only), so they show us trends but only find a subset of accessibility barriers. Highlights:
- Average errors per page went slightly down (to 50). 96.3% of all pages have easily detectable WCAG failures – this is down 1.5% in four years.
- ARIA usage has increased a lot, and “pages that use ARIA are more likely to be inaccessible”.
- 96.1% of all errors are in one or more of the following categories:
- Low contrast text
- Missing alternative text for images
- Empty links
- Missing form input labels
- Empty buttons
- Missing document language
- As noted in the report itself, “Addressing just these few types of issues would significantly improve accessibility across the web.”
Eric finds these figures “embarrassing”. These WCAG requirements are not new – they’ve “all been around since WCAG 1.0” which is now 24 years old. For 96% of websites to still have issues underlines the need for “better strategies to educate people about the issues”.
Eric suggests that browsers themselves could fix some issues – “a ramp built into a train will generally be more available than a situation where every stop needs to provide its own”. Mitigation for low contrast text could be built into the browser, for example.
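The low contrast detection itself is mechanical, which is why it lends itself to automation in tools or browsers. A sketch of the WCAG 2.x contrast-ratio calculation (the formula is from the spec; the colour values below are illustrative):

```python
# WCAG 2.x contrast ratio: linearise sRGB channels, compute relative
# luminance, then compare the lighter and darker colours.
def srgb_to_linear(channel):
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    r, g, b = (srgb_to_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# Black-on-white is the maximum possible ratio, 21:1
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # prints 21.0
# Light grey on white fails the 4.5:1 minimum for normal text (SC 1.4.3)
print(contrast_ratio((170, 170, 170), (255, 255, 255)) < 4.5)  # prints True
```

A browser-level mitigation of the kind Eric describes could, in principle, detect ratios below the threshold and nudge the rendered colours until they pass.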
As for ARIA: “essential ARIA functionality must be transferred into HTML. ARIA needs to be a specialist tool that you only get out if you don’t have any other options. Many of the ARIA techniques are very intricate and for 90% of developers they should never be exposed to that kind of complexity and power.”
Eric concludes that the release of WCAG 3 won’t necessarily help. We have standards already, and people are unable or unwilling to follow them. “In the best case, web accessibility will drag on. In the worst case, we will have multiple standards to follow that have entirely unique ideas of how to test and measure accessibility”.
This landed in my inbox only recently (despite being published in April last year). I remember Jamie from my days at the BBC, so it’s always nice to find out how people are doing!
One of the worst things speakers can do to ruin an attendee’s experience is to make assumptions. Jamie is “semi-speaking” and an unexpected demand to speak can be difficult – so don’t assume someone can speak at a moment’s notice. Another assumption is that someone can move. Jamie needs a harness to sit upright, “so for that reason, a 10-minute ‘break’ often means sitting alone for 10 minutes as that’s not enough time to unstrap, transfer and move anywhere”.
The rest of the interview is largely focussed on how speakers can help their audience to focus. Visual structure for slides is important: “slide numbers, visually indicated sections (colours, icons etc), and coming back to something consistent at the end of each section”. This allows audiences to pace themselves and follow the narrative. Most people will be relying on at least two of three means of access: spoken words, visuals, and something textual or signed. Finally, events should “be joyful. The most engaging accessible content treats any topic in a playful way”.
Jamie ends with “Assumptions are the root cause of most barriers. If you keep on top of the assumptions, then most of the barriers can be avoided.”
Eric Bailey writes a comprehensive article on why you should never, ever provide custom styling for your website’s scrollbars.
The post begins somewhat philosophically: Eric highlights the area of a browser window that is your responsibility (the web page) and then highlights what isn’t (the browser ‘furniture’, URL display, and yes, the scrollbar).
But it’s more than just ideological – people who use Windows themes or Forced Colors Mode / High Contrast Mode may be doing so for aesthetic reasons or because they have accessibility requirements. By overriding their choices, you’re potentially excluding them from being able to use your scrollbar. Windows, Eric reminds us, is incredibly popular – something that developers using shiny MacBooks can sometimes forget.
It is possible to set scrollbar width to 1px – the browser won’t stop you. This is obviously a bad idea. Eric’s point is that by taking on styling for the scrollbar, you’re taking on its WCAG requirements too. It’s now on you to ensure it has a large enough touch area, a high enough contrast, and so on. Eric evaluated a number of scrollbar-code-generators and none of them accounted for this stuff.
Modifying a scrollbar’s visuals breaks external consistency. “Digital literacy is also a spectrum. When digital things don’t look or behave the way they are expected to, people tend to internalize it as a personal failure—they broke something, they’ve been hacked, they’re being spied on, etc”.
Eric’s key takeaway is “maybe you should write less code and in doing so allow more people to do more things”.
An interesting experiment by Mark Steadman, who used Durable, an AI website builder, to generate a website.
Using axe-core to scan the site, he found 17 accessibility issues, of which 11 were ‘critical’ and 6 ‘serious’. These included:
- All form elements in “Contact us” section were missing labels
- The carousel buttons were missing an accessible label
- All the images in the “Gallery” were missing alt text
Manually discovered a11y issues included highly inconsistent keyboard focus indicators – some elements received them, some did not – and an entire set of images in the gallery that were totally inaccessible to the keyboard.
Screen reader support was quite poor too, with multiple links and buttons having the same text, as well as images with poor alt text such as “banner image”.
Mark’s conclusion is fairly positive. AI is here to stay, and the output here is not as bad as it could be (Mark likes the use of native HTML, some practical use of ARIA, and “great” resize support). Mark hopes that one day AI could generate fully accessible websites, but fears that that day won’t come any time soon.
This is just a bit of fun from Reddit. Engineers competed to see who could build the worst possible UI.
- A “Delete your account” button hidden behind three red cups that shuffle like a magic trick. Good luck finding the right one!
- A volume control slider that doesn’t stop at 100% (the slider will just fall off the end and break)
- An “unsubscribe” button next to a fan, which ‘blows’ your mouse cursor away when you try to click the button
- “Real dark mode” – everything is pitch black except for a few pixels of light emanating from your cursor
…the list goes on. Images and videos demonstrating the designs are in the article.
A Deep Dive into Accessibility APIs
I’ve read this three-part series by Neill Hadder, who works at Knowbility. Below is a perhaps oversimplified summary – I’d encourage you to click through to the articles themselves if you’re keen to learn more!
This is an introductory article that still goes into a fair bit of depth, explaining the history of the document object model (DOM), which followed the earlier Windows component object model (COM) and Mac OS X’s Cocoa API. The ‘accessibility tree’ is a web document object whose children include all the accessible objects from the DOM. Some elements, such as SVG, are omitted from the tree unless they’re given an explicit role.
When something important happens, e.g. the display of new content, it’s up to the application to post an event notification to the platform API. Assistive tech (AT) registers what types of events it wants to listen for. Communication works the other way too: AT can send actions to the application. Passing messages between running applications is called inter-process communication (IPC).
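The notification flow described above is essentially a publish/subscribe arrangement. Here is a toy model of it – all names are invented for illustration, and real platform APIs (MSAA, UIA, AT-SPI) are far richer:

```python
# Toy model of the event flow: the application posts notifications to the
# platform API, and assistive tech registers only for the event types it
# cares about. Purely illustrative - not any real API's shape.
class PlatformAPI:
    def __init__(self):
        self.listeners = {}  # event type -> callbacks registered by AT

    def register(self, event_type, callback):
        self.listeners.setdefault(event_type, []).append(callback)

    def post_event(self, event_type, detail):
        # Called by the application when something important happens
        for callback in self.listeners.get(event_type, []):
            callback(detail)

api = PlatformAPI()
announcements = []

# The screen reader subscribes to focus changes only
api.register("focus-change", announcements.append)

api.post_event("focus-change", "Search, edit text")  # delivered to the AT
api.post_event("mouse-move", "x=10, y=20")           # ignored: not registered
print(announcements)  # prints ['Search, edit text']
```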
This article introduces the off-screen model (OSM), which is the idea of intercepting low-level drawing instructions from applications (and the operating system itself) “in order to build a database with which the screen reader could interact as if it were text mode”.
The first OSM program was OutSpoken for Mac, released in 1989 as a kind of “virtual white cane”. It used the numeric keypad to emulate a mouse to visualise and explore the screen layout. Mac’s next AT was VoiceOver, in 2005.
Meanwhile a number of mostly short-lived screen readers were created for Windows. “Microsoft created the IAccessible specification, which put in place the now-familiar tree structure of semantic information about UI objects”.
Neill dives further into the inner workings of OSMs. Here’s a taster:
The work of an OSM is extremely complex. It starts with reading strings of ASCII characters from text-drawing functions. The OSM also needs to keep track of the bitmap in order to insert text at the right place in the OSM when, for example, typed characters are being drawn on a line one instruction at a time as part of the same word. It has to keep track of what’s visible or not, too, such as when areas on the bitmap are transferred off screen and replaced in order to convey the illusion of 3D pull-down menus or layered windows.
Evidently, this model wasn’t sustainable. “Developers had to slowly add special code for each application that didn’t exclusively use standard UI elements”. Misrecognitions, memory leaks and accumulated garbage were an issue.
Windows screen readers very early hit upon a terrific strategy for highly-efficient web page review that has endured, largely unchanged, for over twenty years. It centers around creating a buffered copy of a web page that the user can review like a standard text document.
Or, put a different way: “the one thing that screen readers no longer do is read the screen”.
Screen readers have two modes. “Browse mode” allows users to jump between headings, lists, tables and so on, and to move a virtual cursor through the content (not unlike the “caret browsing” mode that is built into major browsers, and which I hadn’t heard of until today!). To interact with most controls on the page, screen reader users switch to something historically known as “forms mode”.
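The virtual buffer that browse mode reviews can be pictured as a flattened, annotated copy of the page. A toy sketch (the structure and page content are invented; real buffers track far more state):

```python
# Toy "virtual buffer": a flattened copy of the page that browse mode can
# review like a text document, e.g. jumping between headings with a hotkey.
buffer = [
    ("heading", "Latest news"),
    ("text", "Story one summary."),
    ("heading", "Sport"),
    ("text", "Story two summary."),
]

def next_heading(buf, cursor):
    """Return the index of the first heading after the virtual cursor,
    mimicking the H key in browse mode (None if there isn't one)."""
    for i in range(cursor + 1, len(buf)):
        if buf[i][0] == "heading":
            return i
    return None

cursor = 0                       # virtual cursor starts on "Latest news"
cursor = next_heading(buffer, cursor)
print(buffer[cursor][1])         # prints Sport
```

The key point from the article survives even in this toy: the screen reader is navigating its own buffered copy, not the live screen.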
“Screen reader access to page semantics came all at once with Internet Explorer 5’s Microsoft Active Accessibility (MSAA) support”. MSAA lacked the vocabulary for the new control types that were later added to HTML: this is where ARIA comes in.
MSAA also lacked dynamic change support, for anything that happened after the initial page load. “One work-around was the introduction of a screen reader hotkey to refresh the virtual buffer” – this only worked intermittently.
Microsoft introduced its newer UIA accessibility API in 2006 with Windows Vista. In late 2006, the IAccessible2 API arrived, a platform-independent open standard developed by IBM, working closely with screen reader developers and corporate stakeholders. Unlike UIA, IAccessible2 extended MSAA’s IAccessible code library to fix its shortcomings. Firefox quickly implemented it alongside its existing MSAA support, while Google Chrome followed suit in the early 2010s. Meanwhile, Internet Explorer would ultimately rely on a scaled-down version of UIA. IAccessible2, which is what JAWS and NVDA use today, is not a Windows platform API: the libraries are part of the browser.
IPC (from part 1) is the secure, reliable means of handing info back and forth between applications through an operating system API. Low-level hooks, on the other hand, are effectively ‘code injection’, insofar as AT has forced some of its code to run inside the other application’s space. ATs are basically the only non-malicious programs that use this technique.
IAccessible2 allowed screen reader developers “direct access to the browser’s API implementation” using low level hooks:
When a web page loads, JAWS and NVDA need to go through every element on the page to create a virtual buffer. If they were to use IAccessible2 only through IPC, then they’d have to send many, many messages back and forth between the screen reader and browser processes; and, even as fast as computers are, that’s relatively slow. But with code injection, some of the screen reader’s code can run directly inside the browser, gather all the information it needs for a virtual buffer (which requires some complex logic specific to the screen reader), then communicate back to the main screen reader process at the end.
However, “Apple and Google operating systems don’t allow code injection. Windows-based Firefox and Chrome increasingly keep their doors locked while continuing to give assistive technology a pass. [Code injection’s] days are numbered.” There is little incentive for screen reader developers to migrate all of their code from low-level hooks to IPC, especially as this can cause significant slowdown. Neill suggests the developers may need help from browser developers or Microsoft.
As for the current state of play, taken more or less verbatim from the article:
- Windows screen readers rely on MSAA, as well as a few other Windows APIs, in older areas of Windows like the desktop and taskbar, while UI Automation provides access to components added since Windows 8.
- JAWS and NVDA use IAccessible2 in Chrome, Firefox, and Chromium-based Edge. They additionally use ISimpleDOM when they need information not able to be plucked from the accessibility tree. These are code libraries incorporated into the browsers, not Windows.
- Both Firefox and Chrome have more or less ignored UI Automation for all this time. The Edge accessibility team have contributed their UIA implementation to Chromium, but it’s still not turned on by default in Chrome.
- Microsoft incorporated a bridge that allows ATs that rely on UIA in web browsers (Narrator) to communicate with applications that use IAccessible2 (Chrome and Firefox). This bridge continues to interact with ATs solely through IPC but injects its code into the browser whenever possible for the performance boost. This is what’s happening under the hood when using Narrator in those browsers. On the other hand, Narrator predictably uses UIA in Microsoft Edge.
Neill concludes “Mac, iOS, and Android all implement their platform APIs throughout their systems, including third-party browsers. If VoiceOver began to support IAccessible2 or UIA, other Mac and iOS browsers would be ready. What seems likely is that Windows will sooner or later fall in line with other operating systems by shutting down third-party code injection. Screen reader developers will then be forced to undertake the [work to replace hooks with IPCs], and everyone will indeed use the Windows platform API, the performance of which will by then very likely be up to the task”.
Did you know that you can subscribe to dai11y, week11y, fortnight11y or month11y updates? Every newsletter gets the same content; it is your choice to have short, regular emails or longer, less frequent ones. Curated with ♥ by developer @ChrisBAshton.