month11y issue 39

Welcome to your monthly frequent11y newsletter, brought to you by @ChrisBAshton. I hope you enjoy these a11y articles I’ve collated and summarised for you. (Psst – if you find these emails too long, consider switching to shorter, more frequent updates). Now on with the show!

Everyone Watches TV with Subtitles Now. How’d That Happen?

I thought this article was interesting to call out, as in my experience we still often assume captions should be an ‘opt in’ thing. But perhaps things have pivoted to them being ‘opt out’. Indeed, some captions are now ‘on’ by default on social media.

In a 2022 survey of 1,200 people, language learning company Preply determined that 50 percent of Americans use subtitles and closed captions the vast majority of the time they watch content.

According to Preply, 57 percent of all Americans watch shows or movies or videos in public on their mobile devices, but a very significant 74 percent of Gen Z do the same. Even if you’re (hopefully) using headphones while in public, it’s likely you’re getting poor audio quality… subtitles are practically a necessity.

The article also cites ‘habit’ as one of the main reasons. People are simply more used to using subtitles: in 2020, nearly 30 percent of on-demand streaming was for non-US shows.

Finally, there are some modern technical issues that could explain why hearing dialogue is more difficult:

For some, the problem is the design of modern televisions; the majority of which place internal speakers at the bottom of the set instead of facing towards the audience, causing significantly worse audio quality. Other issues are caused by sound designs optimized for theatrical experiences, which can result in compressed audio when translated to home.

Become an accessibility champion by using simple mockup annotations

“Accessibility annotations” detail interaction design behaviours in UI designs. They communicate the intended experience for keyboard and screen reader users, allowing developers to implement the design just as they would implement a visual one. This speeds up the design QA and development process, but also ensures designers have thought about accessible interactions from the start.

The article shows screenshots of some annotated designs, with symbols clarifying the intended focus tab order of a component, or call-outs to screen-reader-specific text. Even the underlying semantics – whether a list should be ordered or unordered, for example – are clarified with annotations.

There is also a Jira ticket template “to facilitate design accessibility review discussions while reviewing the visual mockups”.

The article links to Deque’s downloadable accessibility annotations toolkit, which has files for Adobe Illustrator and Sketch as well as individual SVGs.

The difference between Increased Contrast Mode and Windows High Contrast Mode (Forced Colours Mode)

Martin Underhill describes Increased Contrast Mode (ICM) – a setting that users can opt into, and which website designers/developers can accommodate through a media query:

@media (prefers-contrast: more) {
    /* High contrast styling goes here */
}
Not a lot of people know about this, or they don’t have it high on their list of priorities. And it’s entirely reliant on the website creator choosing to support it.

Windows High Contrast Mode (WHCM), on the other hand, is a ‘forced colours mode’. It doesn’t rely on the website creator, and instead forces the chosen theme onto the website. It has a well supported media query like ICM, but only allows a handful of carefully selected things to be styled:

@media (forced-colors: active) {
    /* WHCM styling goes here */
    /* Limited to things like text colour, background colour and keyboard focus outline colour */
}

Martin has made the decision to not attempt to define styles for WHCM users, instead delegating this to the user’s operating system. It should mean they have a more consistent/familiar experience, and it will automatically support their choice of dark or light theme.
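To make that concrete, here is a minimal sketch of the kind of styling WHCM does permit. The selectors are hypothetical, but the system colour keywords (CanvasText, Highlight) are real CSS values that map to the user’s chosen theme rather than hard-coded colours:

```css
/* Only a handful of properties survive forced colours mode, and
   hard-coded colours are generally overridden by the user's theme. */
@media (forced-colors: active) {
  .card {
    /* Backgrounds are flattened, so a border marks the component edge */
    border: 1px solid CanvasText;
  }

  .button:focus {
    /* Keyboard focus outline drawn in the theme's highlight colour */
    outline: 2px solid Highlight;
  }
}
```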

Don’t meddle with user input

Martin Underhill warns against designs that manipulate user input, and the maxlength attribute in particular, which prevents users from typing anything beyond a certain character limit.

Whilst these features are often implemented with the best of intentions – to assist the user in inputting data in the correct format, for example – they’re not very accessible solutions.

Martin references the GOV.UK Design System’s Date Input component, which expects a format like 27 3 2007 but does not prevent users from typing longer numbers (or indeed typing a-z characters). Someone raised a GitHub issue to discuss adding maxlength to the fields, but Hanna Laakso responds:

If you use maxlength attribute on a form field to limit user input, users might not receive appropriate feedback of the limit. For instance, the user might not notice that not all the information they entered appeared in the form field … It is generally better to let users enter their information in a way that suits them and allow them to submit the form.

Instead, using hint text to nudge the user in the right direction, alongside validation to tell the user how to fix the problem in any particular form field, is the suggested way forward. Martin notes that “this should also help satisfy 3.3.2 Labels or Instructions and 3.3.3 Error Suggestion, respectively, from the Web Content Accessibility Guidelines (WCAG)”.
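As a rough sketch of that approach – hint text plus unconstrained inputs – the markup might look something like this (simplified and hypothetical, not the actual GOV.UK component):

```html
<!-- The hint shows the expected format; no maxlength, so nothing the
     user types is silently discarded. Validation runs on submit. -->
<fieldset>
  <legend>When was your passport issued?</legend>
  <p id="date-hint">For example, 27 3 2007</p>
  <div>
    <label for="day">Day</label>
    <input id="day" name="day" type="text" inputmode="numeric" aria-describedby="date-hint">
  </div>
  <div>
    <label for="month">Month</label>
    <input id="month" name="month" type="text" inputmode="numeric" aria-describedby="date-hint">
  </div>
  <div>
    <label for="year">Year</label>
    <input id="year" name="year" type="text" inputmode="numeric" aria-describedby="date-hint">
  </div>
</fieldset>
```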

Toggles suck!

This somewhat inflammatory headline by AxessLab leads into a very long but enjoyable article about toggle design. The author argues that real world toggles (light switches) work well because:

  1. It’s clear whether it’s worked, via the context (the light immediately comes on)
  2. One can press the switch as firmly as one likes and get the same outcome (i.e. pushing down extra hard on the switch still just turns the light on).

In the digital world, the design doesn’t carry over so well. For settings like cookie preferences, there is no obvious visual feedback when enabling/disabling the setting. And activating the switch repeatedly just flutters it back and forth between ‘on’ and ‘off’.

There are lots of illustrations and screenshots in the article, highlighting that it’s often unclear what state a toggle is in. Designs vary, and whilst “most western designers seem to assume that ‘right = active’”, that’s not always the case.

It is possible to make accessible toggles. The author links to articles by industry heavyweights: Heydon Pickering’s article on Toggle Buttons, an article on Toggle Switch design by Sara Soueidan, and Under Engineered Toggles by Adrian Roselli.
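As a flavour of the toggle-button approach those articles describe, a minimal sketch might look like this (illustrative markup, not lifted from any of the linked posts):

```html
<!-- A toggle built on a native <button>: screen readers announce
     "pressed" / "not pressed", and the visible label never changes. -->
<button type="button" id="notify-toggle" aria-pressed="false">
  Email notifications
</button>

<script>
  const toggle = document.getElementById('notify-toggle');
  toggle.addEventListener('click', () => {
    const pressed = toggle.getAttribute('aria-pressed') === 'true';
    toggle.setAttribute('aria-pressed', String(!pressed));
  });
</script>
```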

The author also cites examples of toggles done well. They show a screenshot of a filtering toggle on Airbnb, where a toggle button narrows down the number of available houses. “Users can understand by the reduced amount of houses that the filter is active, even if they don’t understand the toggle control. The number in the button is physically close to the toggle control, and many users will notice the change as it happens and draw the correct conclusion from context”.

But whilst Airbnb is a big name, other big names are content to not use toggles. The author shows screenshots of Amazon using plain old radio buttons, and Slack using plain old buttons. The author concludes with advice to “just use a checkbox or radio group”.

There were a few other bits of interesting info tangential to the theme of toggles.

“In our user tests, a majority of users will assume an empty text field is actually mandatory and expect an input to be made. So much so that we even recommend you to not use asterisks * to indicate mandatory fields, and instead mark optional fields as “optional”.”

Indeed, “the “filled in = active / touched / done” pattern is so well recognized that if a text input field is not empty, many users will believe it’s done and requires no more work. That’s why you should try to avoid placeholder text and labels that are positioned inside the input field like the commonly used Material UI component, even if it “moves away” on focus.”

A worthwhile read!

The only accessibility specialist in the room

It’s hard being the only one in your organisation or team responsible for accessibility. If that sounds familiar, I salute you, and this one’s for you.

This article by Henny Swan might resonate with some of you. Henny has some advice:

  1. You are not the only person responsible for accessibility. You may be the only person with “accessibility” in your job title, but it’s everyone’s responsibility, from CEOs to content editors, designers and developers.
  2. Your role is as much about relationships as it is about accessibility. Look for the managers and decision makers who can make things happen, and look for the designers, developers, editors and testers that want to get into accessibility: they can be powerful advocates.
  3. Find ways to scale. Think about what questions you’re asked the most, and document your answers in shared spaces (Confluence, wiki, design system documentation, etc). Document processes that can be followed within teams, e.g. reviewing designs for accessibility, triaging a11y issues or writing user stories for accessibility.
  4. You can’t know everything. Tell people you will need to go and research something, and come back with options for them to consider. Get a consultant if specialist knowledge is needed. If you need a budget, invest some time in writing a business case for accessibility.
  5. Build a support network. Set up an a11y Slack channel. Join the Champions of Accessibility Network (CAN) on LinkedIn, and the WebAIM discussion list. Consider getting a mentor, e.g. at Accessible Community.

WebAIM Million Report 2023

In March, WebAIM published their annual accessibility report. A number of well-informed folks have read the report and written articles with their key takeaways.

Manuel Matuzović picks up on one figure in his post, “50.1% empty links”. The number of websites containing links with no text (usually when linking an image that lacks alt text) has risen by 0.4%. He tests out various screen readers on an ‘empty link image’ and documents the results, which are universally garbage, albeit with some differences.

Manuel concludes that you should “test your sites at least with an automatic testing tool like axe, Lighthouse, or Wave, and label linked graphics. I’ve described several ways in ‘Buttons and the Baader–Meinhof phenomenon’.”
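The fix Manuel suggests boils down to giving the linked image an accessible name. A simplified before/after sketch (the filenames and text here are made up):

```html
<!-- Empty link: the image has no alt text, so a screen reader has
     nothing to announce except, perhaps, the URL. -->
<a href="/home"><img src="logo.png"></a>

<!-- Labelled link: the alt text becomes the link's accessible name. -->
<a href="/home"><img src="logo.png" alt="Acme Corp home"></a>
```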

In “We need accessibility action”, Eric Eggert goes deeper. He notes that WebAIM’s tests are all automated (and test home pages only), so they show us trends but only find a subset of accessibility barriers. Highlights:

  • Average errors per page went slightly down (to 50). 96.3% of all pages have easily detectable WCAG failures – this is down 1.5% in four years.
  • ARIA usage has increased a lot, and “pages that use ARIA are more likely to be inaccessible”.
  • 96.1% of all errors are in one or more of the following categories:
    • Low contrast text
    • Missing alternative text for images
    • Empty links
    • Missing form input labels
    • Empty buttons
    • Missing document language
  • As noted on the report itself, “Addressing just these few types of issues would significantly improve accessibility across the web.”

Eric finds these figures “embarrassing”. These WCAG requirements are not new – they’ve “all been around since WCAG 1.0” which is now 24 years old. For 96% of websites to still have issues underlines the need for “better strategies to educate people about the issues”.

Eric suggests that browsers themselves could fix some issues – “a ramp built into a train will generally be more available than a situation where every stop needs to provide its own”. Mitigation for low contrast text could be built into the browser, for example.

Other errors, like empty links and lack of alt text, really require developer intervention – and Eric argues that such errors should be highlighted in the browser’s console error messages. “There is no reason why JavaScript programming errors trigger messages, and accessibility issues do not. Tooling is bizarrely oblivious to accessibility.”

As for ARIA: “essential ARIA functionality must be transferred into HTML. ARIA needs to be a specialist tool that you only get out if you don’t have any other options. Many of the ARIA techniques are very intricate and for 90% of developers they should never be exposed to that kind of complexity and power.”

Eric concludes that the release of WCAG 3 won’t necessarily help. We have standards already, and people are unable or unwilling to follow them. “In the best case, web accessibility will drag on. In the worst case, we will have multiple standards to follow that have entirely unique ideas of how to test and measure accessibility”.

Interview with Jamie Knight (and Lion)

This landed in my inbox only recently (despite being published in April last year). I remember Jamie from my days at the BBC, so it’s always nice to find out how people are doing!

Jamie has autism and mobility issues. Denis Boudreau interviewed Jamie about the accessibility of virtual and in-person events.

One of the worst things speakers can do to ruin an attendee’s experience is to make assumptions. Jamie is “semi-speaking” and an unexpected demand to speak can be difficult – so don’t assume someone can speak at a moment’s notice. Another assumption is that someone can move. Jamie needs a harness to sit upright, “so for that reason, a 10-minute ‘break’ often means sitting alone for 10 minutes as that’s not enough time to unstrap, transfer and move anywhere”.

The rest of the interview is largely focussed on how speakers can help their audience to focus. Visual structure for slides is important: “slide numbers, visually indicated sections (colours, icons etc), and coming back to something consistent at the end of each section”. This allows audiences to pace themselves and follow the narrative. Most people will be relying on at least two of three means of access: spoken words, visuals, and something textual or signed. Finally, events should “be joyful. The most engaging accessible content treats any topic in a playful way”.

Jamie ends with “Assumptions are the root cause of most barriers. If you keep on top of the assumptions, then most of the barriers can be avoided.”

Don’t use custom CSS scrollbars

Eric Bailey writes a comprehensive article on why you should never, ever provide custom styling for your website’s scrollbars.

The post begins somewhat philosophically: Eric highlights the area of a browser window that is your responsibility (the web page) and then highlights what isn’t (the browser ‘furniture’, URL display, and yes, the scrollbar).

But it’s more than just ideological – people who use Windows themes or Forced Colors Mode / High Contrast Mode may be doing so for aesthetic reasons or because they have accessibility requirements. By overriding their choices, you’re potentially excluding them from being able to use your scrollbar. Windows, Eric reminds us, is incredibly popular – something that developers using shiny MacBooks can sometimes forget.

It is possible to set scrollbar width to 1px – the browser won’t stop you. This is obviously a bad idea. Eric’s point is that by taking on styling for the scrollbar, you’re taking on its WCAG requirements too. It’s now on you to ensure it has a large enough touch area, a high enough contrast, and so on. Eric evaluated a number of scrollbar-code-generators and none of them accounted for this stuff.
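To illustrate just how little the browser pushes back, here’s the kind of override the article warns against (a sketch, not code from the post):

```css
/* The browser accepts all of this without complaint, even though a
   1px, low-contrast scrollbar fails the touch target and contrast
   expectations that the default scrollbar met for free. */
.scroll-region {
  scrollbar-width: thin;      /* Firefox */
  scrollbar-color: #bbb #eee; /* thumb colour, track colour */
}
.scroll-region::-webkit-scrollbar {
  width: 1px;                 /* technically allowed */
}
```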

Modifying a scrollbar’s visuals breaks external consistency. “Digital literacy is also a spectrum. When digital things don’t look or behave the way they are expected to, people tend to internalize it as a personal failure—they broke something, they’ve been hacked, they’re being spied on, etc”.

Eric’s key takeaway is “maybe you should write less code and in doing so allow more people to do more things”.

I Made a Site Leveraging AI: How Accessible Was It?

Interesting experiment by Mark Steadman, who used Durable to generate a website.

Using axe-core to scan the site, he found 17 accessibility issues, of which 11 were ‘critical’ and 6 ‘serious’. These included:

  • All form elements in “Contact us” section were missing labels
  • The carousel buttons were missing an accessible label
  • All the images in the “Gallery” were missing alt text

Manually discovered a11y issues included keyboard focus indicators being highly inconsistent – some elements had one, some did not – and an entire set of images in the gallery being totally inaccessible to the keyboard.

Screen reader support was quite poor too, with multiple links and buttons having the same text, as well as images with poor alt text such as “banner image”.

Mark’s conclusion is fairly positive. AI is here to stay, and the output here is not as bad as it could be (Mark likes the use of native HTML, some practical use of ARIA, and “great” resize support). Mark hopes that one day AI could generate fully accessible websites, but fears that that day won’t come any time soon.

The ultimate inaccessible UI components

This is just a bit of fun from Reddit. Engineers competed to see who could build the worst possible UI.

Highlights include:

  • A “Delete your account” button hidden behind three red cups that shuffle like a magic trick. Good luck finding the right one!
  • A volume control slider that doesn’t stop at 100% (the slider will just fall off the end and break)
  • An “unsubscribe” button next to a fan, which ‘blows’ your mouse cursor away when you try to click the button
  • “Real dark mode” – everything is pitch black except for a few pixels of light emanating from your cursor

…the list goes on. Images and videos demonstrating the designs are in the article.

A Deep Dive into Accessibility APIs

I’ve read this three-part series by Neill Hadder, who works at Knowbility. Below is a perhaps oversimplified summary – I’d encourage you to click through to the articles themselves if you’re keen to learn more!

Part 1: Swinging Through the Accessibility Tree Like a Ring-Tailed Lemur

This is an introductory article that still goes into a fair bit of depth, explaining the history of the document object model (DOM) following the earlier Windows component object model (COM) and the Mac’s OS X Cocoa API. The ‘accessibility tree’ is a web document object whose children include all the accessible objects from the DOM. Some elements, such as SVG, are omitted from the tree, unless they’re given an explicit role.

When something important happens, e.g. the display of new content, it’s up to the application to post an event notification to the platform API. Assistive tech (AT) registers what type of events it wants to listen for. The same responsibilities can work the other way too, where AT sends actions to the application. Passing messages between running applications is called inter-process communication (IPC).

Part 2: The Road to Good Intentions Is Paved with Hell

This article introduces the off-screen model (OSM), which is the idea of intercepting low-level drawing instructions from applications (and the operating system itself) “in order to build a database with which the screen reader could interact as if it were text mode”.

The first OSM program was OutSpoken for Mac, released in 1989 as a kind of “virtual white cane”. It used the numeric keypad to emulate a mouse to visualise and explore the screen layout. Mac’s next AT was VoiceOver, in 2005.

Meanwhile a number of mostly short-lived screenreaders were created for Windows. “Microsoft created the IAccessible specification, which put in place the now-familiar tree structure of semantic information about UI objects”.

Neill dives further into the inner workings of OSMs. Here’s a taster:

The work of an OSM is extremely complex. It starts with reading Strings of ASCII characters from text-drawing functions. The OSM also needs to keep track of the bitmap in order to insert text at the right place in the OSM when, for example, typed characters are being drawn on a line one instruction at a time as part of the same word. It has to keep track of what’s visible or not, too, such as when areas on the bitmap are transferred off screen and replaced in order to convey the illusion of 3D pull-down menus or layered windows

Evidently, this model wasn’t sustainable. “Developers had to slowly add special code for each application that didn’t exclusively use standard UI elements”. Misrecognitions, memory leaks and accumulated garbage were an issue.

Part 3: Your Browser May Be Having a Secret Relationship with a Screen Reader

Windows screen readers very early hit upon a terrific strategy for highly-efficient web page review that has endured, largely unchanged, for over twenty years. It centers around creating a buffered copy of a web page that the user can review like a standard text document.

Or, put a different way: “the one thing that screen readers no longer do is read the screen”.

Screen readers have two modes. “Browse mode” allows users to jump along headings, lists, tables and so on, and also to use virtual cursor movement (not unlike the “caret browsing mode” that is built into major browsers, and which I hadn’t heard of until today!). To interact with most controls on the page, screen reader users switch to something historically known as “forms mode”.

“Screen reader access to page semantics came all at once with Internet Explorer 5’s Microsoft Active Accessibility (MSAA) support”. But MSAA lacked the vocabulary for the new control types later added to HTML: this is where ARIA comes in.

MSAA also lacked dynamic change support, for anything that happened after the initial page load. “One work-around was the introduction of a screen reader hotkey to refresh the virtual buffer” – this only worked intermittently.

Microsoft introduced its newer UIA accessibility API in 2006 with Windows Vista. In late 2006, the IAccessible2 API arrived, a platform-independent open standard developed by IBM, working closely with screen reader developers and corporate stakeholders. Unlike UIA, IAccessible2 extended MSAA’s IAccessible code library to fix its shortcomings. Firefox quickly implemented it alongside its existing MSAA support, while Google Chrome followed suit in the early 2010s. Meanwhile, Internet Explorer would ultimately rely on a scaled-down version of UIA. IAccessible2, which is what JAWS and NVDA use today, is not a Windows platform API: the libraries are part of the browser.

IPC (from part 1) is the secure, reliable means of handing info back and forth between applications through an operating system API. Low-level hooks, on the other hand, are effectively ‘code injection’, insofar as AT has forced some of its code to run inside the other application’s space. ATs are basically the only non-malicious programs that use this technique.

IAccessible2 allowed screen reader developers “direct access to the browser’s API implementation” using low level hooks:

When a web page loads, JAWS and NVDA need to go through every element on the page to create a virtual buffer. If they were to use IAccessible2 only through IPC, then they’d have to send many, many messages back and forth between the screen reader and browser processes; and, even as fast as computers are, that’s relatively slow. But with code injection, some of the screen reader’s code can run directly inside the browser, gather all the information it needs for a virtual buffer (which requires some complex logic specific to the screen reader), then communicate back to the main screen reader process at the end.

However, “Apple and Google operating systems don’t allow code injection. Windows-based Firefox and Chrome increasingly keep their doors locked while continuing to give assistive technology a pass. [Code injection’s] days are numbered.” There is little incentive for screen reader developers to migrate all of their code from low-level hooks to IPCs, especially as this can cause significant slowdown. Neill suggests the developers may need help from browser developers or Microsoft.

As for the current state of play, taken more or less verbatim from the article:

  • Windows screen readers rely on MSAA, as well as a few other Windows APIs, in older areas of Windows like the desktop and taskbar, while UI Automation provides access to components added since Windows 8.
  • JAWS and NVDA use IAccessible2 in Chrome, Firefox, and Chromium-based Edge. They additionally use ISimpleDOM when they need information not able to be plucked from the accessibility tree. These are code libraries incorporated into the browsers, not Windows.
  • Both Firefox and Chrome have more or less ignored UI Automation for all this time. The Edge accessibility team have contributed their UIA implementation to Chromium, but it’s still not turned on by default in Chrome.
  • Microsoft incorporated a bridge that allows ATs that rely on UIA in web browsers (Narrator) to communicate with applications that use IAccessible2 (Chrome and Firefox). This bridge continues to interact with ATs solely through IPC but injects its code into the browser whenever possible for the performance boost. This is what’s happening under the hood when using Narrator in those browsers. On the other hand, Narrator predictably uses UIA in Microsoft Edge.

Neill concludes “Mac, IOS, and Android all implement their platform APIs throughout their systems, including third-party browsers. If VoiceOver began to support IAccessible2 or UIA, other Mac and IOS browsers would be ready. What seems likely is that Windows will sooner or later fall in line with other operating systems by shutting down third-party code injection. Screen reader developers will then be forced to undertake the [work to replace hooks with IPCs], and everyone will indeed use the Windows platform API, the performance of which will by then very likely be up to the task”.

Whew, that was a long newsletter! Did you know that you can subscribe to smaller, more frequent updates? The dai11y, week11y and fortnight11y newsletters get exactly the same content. The choice is entirely up to you! Curated with ♥ by developer @ChrisBAshton.