month11y issue 15

Welcome to your monthly frequent11y newsletter, brought to you by @ChrisBAshton. I hope you enjoy these a11y articles I’ve collated and summarised for you. (Psst – if you find these emails too long, consider switching to shorter, more frequent updates). Now on with the show!

Lists

  • A Jeremy Keith entry from his journal. Lists are helpfully announced to screen readers when they are navigated to (e.g. “List: six items”). However, WebKit browsers such as Safari don’t announce lists if the list’s bullets have been removed using CSS (just as they don’t announce content that has been visually hidden with display: none). There’s a Twitter thread explaining why, but it boils down to this: “If a sighted user doesn’t need to know it’s a list, why would a screen reader user?”
  • If you’ve removed bullets but your content is a list (you may have used some visual replacement for bullets, e.g. image markers), you can force screen readers to treat your content as a list by adding role="list".
  • There’s an interesting point about “pixel perfection” across browsers, too. It’s widely considered to be an unattainable or undesirable goal nowadays; why should we demand the aural equivalent? Websites don’t need to sound identical in every screen reader.
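The fix described above is small; a sketch, with illustrative content (the class name and inline style are just for demonstration):

```html
<!-- list-style: none removes the bullets, which is what causes
     WebKit/VoiceOver to drop the list semantics.
     role="list" restores the “List: three items” announcement. -->
<ul role="list" class="nav-list" style="list-style: none;">
  <li>Home</li>
  <li>Articles</li>
  <li>About</li>
</ul>
```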

VoiceOver Preview for macOS Firefox

  • Mozilla have worked hard over the past year to deliver VoiceOver support for Firefox on macOS – something that had been lacking for 15 years. It’s now ready to try in the Firefox 85 Beta, and Mozilla are calling on volunteers to try it out and report any bugs they encounter. Aside from a few known issues, it should be fairly stable.

Equal Entry Guidelines for Describing 360-Degree Video

  • An interesting set of guidelines describing the challenges of audio-describing a 360 degree video (which will become more prevalent as VR grows). You should divide the video into scenes, write a brief introductory description for each scene, then write audio descriptions for each direction a viewer could face during the scene. Consider ‘forward’, ‘left’, ‘right’, ‘backward’, ‘up’ and ‘down’ views. See the demo on YouTube.

Below I summarise not one, not two, but three articles. It’s my attempt to clarify what seems quite a contradictory issue: whether you should ‘try on’ a disability to build empathy, and/or build better products and services. As someone who has written a series of articles on using the web under various constraints, it’s a subject close to my heart and an important conversation to have.

Article 1: Why I won’t “try on” disability to build empathy in the design process (and you should think twice about it.)

  • Amelia Abreu describes how accessibility workshops that, for example, have able-bodied participants navigate a high street in wheelchairs to gain awareness of the shortage of ramps, can be counter-productive. A research paper concluded that short-term mimicking of the effects of a disability can a) result in fear, apprehension and pity, b) fail to account for the diverse coping mechanisms people develop over time, and therefore c) cause participants to underestimate the true capabilities of persons with disabilities.
  • Instead, Amelia suggests we build relationships with real people with disabilities: get to know their diverse interests and accessibility concerns, and ask how you can be an ally for disability rights. She also suggests drawing on your own experiences – in the wheelchair example, Amelia developed an awareness of the inaccessibility of infrastructure when she had to take her daughter around in a stroller.

Article 2: Going Colorblind: An Experiment in Empathy and Accessibility

  • This article appeared a month before the first, and on the same website! Sara Novak describes her colleague Peter’s deuteranomalous colour blindness, which affects 5% of men and means he has a hard time differentiating greens from other colours. Sara admits she was sympathetic rather than empathetic, so she talked to Peter and decided to see what it’s like to be colourblind for three days, using the Chrome extension See.
  • Sara realised she’d been colour-coding her responses in emails, and that this was difficult to decipher. She realised why Peter bolded important info in emails rather than rely on colour, and she started to do the same. She also encountered inaccessible web forms which used colour alone to convey error state.

Article 3: Get the Funkify Out: A Neat Accessibility Tool/Disability Simulator

  • Michael Larsen writes about the “Funkify Disability Simulator” Chrome extension, which attempts to simulate what it’s like to browse the web with dyslexia, astigmatism, jittery hands and high distraction (much like GDS’s own accessibility personas). With it, Michael was able to create a custom profile that makes a page look “very much like it would without my reading glasses”.

In conclusion, I’m not convinced I have a definitive answer. These were all useful articles and I learned something from each, but this is still a topic on which I’m uneasy and am keen to keep learning about.

Can I use a screen reader and know exactly what it’s like to be a blind person? No, of course not – there are all manner of lived differences.

Can I use a screen reader to test my product? Yes, of course – without testing, we’ve no hope of finding and fixing accessibility issues.

Can I use a screen reader to build empathy? This is more complex. In Sara’s case, it seems she did build empathy for her colourblind colleague through use of a simulator. Perhaps the key is that she didn’t empathise in isolation; she was engaged with Peter and able to ask questions and compare her simulated world view with his. In contrast, I can see how a first time screen reader user with no point of reference could be overwhelmed and unable to navigate, and in turn develop a misguided view of what a blind person is capable of doing.

The articles above were hand-picked from various accessibility newsletters I’m subscribed to. If there are other articles that you recommend, please do send them my way!


State-Switch Controls: The Infamous Case of the “Mute” Button

  • An article exploring the design of ‘mute’ buttons on the iPhone ‘call’ screen, on Zoom, and on WebEx. Two of the three use fill colour alone to denote state: the universal microphone icon has a dash through it, regardless of what state you’re in, making it difficult to know whether your microphone is currently muted. Zoom is the one that gets it right, as it removes the dash from the microphone when your mic is active, and has a label on the button to indicate what will happen when you press it.
  • Aside: I still struggle with Zoom’s implementation, and have yet to find one that doesn’t confuse! Perhaps the best I’ve seen is Google Hangouts’ version, but that could just be down to familiarity as I use it every day.

WordPress adds support for video captions and subtitles

  • WordPress v5.6 “Simone” introduces WebVTT support for its videos. This is a big deal considering WordPress powers around 4 out of 10 websites. It means you can upload .vtt files containing subtitles, to enable closed captions on the video. The article gives a nice example of a VTT file, which is just text formatted in a particular way.
  • Many WordPress hosting providers aren’t actually well suited for streaming videos, so the author Jon Henshaw recommends uploading the video itself to a CDN, even if you self-host the VTT file.
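For a flavour of the format, a minimal WebVTT file looks something like this (the timings and caption text here are invented for illustration):

```
WEBVTT

00:00:00.000 --> 00:00:04.000
Hello, and welcome to the video.

00:00:04.500 --> 00:00:09.000
Each cue is a timestamp range followed by the caption text.
```

The file is then referenced from a <track> element inside the <video>, e.g. <track kind="captions" src="captions.vtt" srclang="en" label="English">.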

The lang attribute: browsers telling lies, telling sweet little lies

  • Manuel Matuzović shares some useful CSS that can alert you to a missing, empty or incorrect page-level lang attribute. For example:
  • html:not([lang]) { border: 10px dotted red; }
  • Manuel explains why setting the right value is important for screen reader support, as well as for features like auto-translate.
  • There’s an interesting section on quotation marks, highlighting the difference in style between English, German and French quotation mark notation. I wasn’t aware they were different!
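Sketching out the approach a little further (the debug border is arbitrary; the :lang() rules illustrate the quotation-mark differences the article mentions):

```css
/* Flag a missing or empty page-level lang attribute */
html:not([lang]),
html[lang=""] {
  border: 10px dotted red;
}

/* Language-aware quotation marks for <q> elements */
q:lang(en) { quotes: "“" "”"; }
q:lang(de) { quotes: "„" "“"; }
q:lang(fr) { quotes: "« " " »"; }
```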

Interaction Media Features and Their Potential (for Incorrect Assumptions)

  • A really interesting CSS-Tricks article by Patrick Lauke, exploring the Media Queries Level 4 Interaction Media Features.
  • In theory, they enable the detection of things like whether the user is using a mouse or a touch screen (@media (pointer: fine) vs @media (pointer: coarse)), which you could use to decide whether to make buttons and touch targets bigger. They also expose hover support: @media (hover: hover) vs @media (hover: none).
  • In practice, these queries only expose what the browser thinks is the primary input. The user may have a mouse but choose to use their touch screen, or may have an iPhone but primarily navigate via a Bluetooth linked keyboard.
  • There is another set of media queries that report on all available inputs: any-pointer and any-hover. If any of the inputs has hover support, for example, then any-hover: hover will be matched.
  • We can combine queries for educated guesses. @media (pointer: coarse) and (any-pointer: fine) suggests the primary input is touchscreen, but that there is a mouse or stylus present.
  • We risk breaking the user experience by optimising for the wrong input type. We should follow a progressive enhancement approach, e.g. always listen to mouse/keyboard events but also listen for touchstart events if a coarse pointer is detected. Another option is to provide users an explicit choice of ‘Mouse’ vs ‘Touch’.
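The combinations above can be sketched like so (the selector and sizes are illustrative, not from the article):

```css
/* Base styles: usable for every input type (progressive enhancement) */
.target { min-height: 32px; }

/* The *primary* input is coarse (e.g. a touchscreen): enlarge targets */
@media (pointer: coarse) {
  .target { min-height: 44px; }
}

/* Touch-first device, but a mouse or stylus is also present */
@media (pointer: coarse) and (any-pointer: fine) {
  .target { min-height: 40px; }
}

/* Only attach hover-dependent styling if *some* input can hover */
@media (any-hover: hover) {
  .target:hover { outline: 2px solid currentColor; }
}
```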

Accessibility in tech improved in 2020, but more must be done

  • A mammoth article highlighting the key accessibility improvements made by the 6 giants of tech: Apple, Google, Microsoft, Amazon, Facebook and Twitter. There’s a small conclusion at the end, briefly mentioning a few household names that have yet to fix fundamental issues in their apps, but the majority of the article is focused positively on the companies above.
  • I learned that Microsoft deliberately designed Xbox Series S/X boxes so that they could be more easily opened unassisted by people with disabilities, and that the consoles’ ports have tactile nubs to help low vision users identify them. I also learned that Amazon have teamed up with Voiceitt – a speech recognition company – to make Alexa usable by people with speech impairments.
  • Thanks to Matt Hobbs’ Frontend Fuel for linking me to the article.

Death of the PDF? Not quite, but it’s great news for accessibility

  • Danny Bluestone writes about the significance of the change in content design guidance on GOV.UK, which came into effect on 7th December. The updated guidance states “If you publish a PDF or other non-HTML document without an accessible version, you may be breaking the law”. Government departments are expected to phase out their usage of PDF as a way of publishing content.
  • The article highlights some great reasons why PDFs don’t work well online: they’re not responsive (so don’t scale on mobile), it’s difficult for visually impaired users to change their colour scheme and text size, and they easily become out of date as they’re harder to maintain.

Microsoft Backs Development of Smart Cane for Visually Impaired

  • An interesting idea from London-based startup WeWalk, who have recently joined Microsoft’s AI for Accessibility program. Their ‘smart cane’ uses ultrasonic object detection to spot hazards such as parked cars and, paired with a smartphone app, also offers turn-by-turn GPS navigation and taxi-booking facilities. The cane will retail at $600.

Is Progressive Enhancement Dead Yet? (video, 8 mins)

  • Another Heydon Pickering ‘Web Briefs’ video, with a somewhat clickbaity title. This isn’t an analysis of frontend strategies in 2021, but a characteristically opinionated explanation of what good vs bad progressive enhancement looks like. In it, Heydon reinforces that:
  • Sites should be functional and have decent layouts by default. Using CSS feature queries (@supports), you can progressively enhance to better layouts; you should not use JavaScript to ‘fill in’ unsupported CSS, because JavaScript is inefficient at rendering. JS modules should be imported using <script type="module">, which is ignored by older browsers.
  • Progressive enhancement is not displaying a “Please turn on JavaScript” message, or rendering HTML only for it to re-render with JS ‘hydration’.
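The two techniques Heydon reinforces look roughly like this in practice (the file name and class name are placeholders):

```html
<!-- Ignored by browsers without ES module support; the page
     still works without it -->
<script type="module" src="/js/enhancements.js"></script>

<style>
  /* Decent single-column layout for every browser */
  .cards { max-width: 40em; }

  /* Enhanced layout only where the browser supports grid */
  @supports (display: grid) {
    .cards {
      display: grid;
      grid-template-columns: repeat(auto-fill, minmax(15em, 1fr));
      max-width: none;
    }
  }
</style>
```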

Focus management and inert

  • Article by Eric Bailey, reminding developers to avoid manually specifying a tab order with tabindex="[positive integer]" (there is, arguably, never a good reason to do this). But tabindex="-1" is great for building accessible widgets: it makes an element focusable with JavaScript or click/tap where it would otherwise not be (i.e. if it is not a link, a button or an input).
  • One of the hardest things to get right is “focus trapping”: restricting focus events so that they only apply to elements within your modal, so that keyboard users don’t get lost tabbing through invisible elements underneath. The inert attribute makes implementation a lot easier. Assuming your modal is in a <div> outside of your <main>, apply the attribute with <main inert> and nothing within will be focusable. Browser support is extremely poor at the moment, but expect that to change in 2021.
  • I learned about a screen reader mode I hadn’t heard of: “interaction mode”. This allows users to explore the page with a ‘virtual cursor’, without applying focus to any of the content. Naturally that won’t play well with your modal, so liberal use of aria-hidden="true" is the answer.
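Putting those pieces together, a modal built this way might be structured as follows (the IDs and text are placeholders of my own, not from the article):

```html
<main inert>
  <!-- While inert is present, nothing in here can receive focus
       or clicks, so keyboard users can't get lost behind the modal -->
  <h1>Page content</h1>
</main>

<div role="dialog" aria-modal="true" aria-labelledby="dialog-title">
  <!-- tabindex="-1" lets us move focus here with JavaScript
       without adding the heading to the tab order -->
  <h2 id="dialog-title" tabindex="-1">Confirm</h2>
  <button type="button">Close</button>
</div>
```

When the modal closes, remove the attribute (e.g. document.querySelector('main').inert = false) and return focus to the element that opened the modal. Given the patchy browser support, a polyfill such as WICG’s inert polyfill can fill the gap for now.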

I then thought I’d try something different, and bring you five different articles on screen readers. Let me know if you enjoy #WeekOfScreenReader and whether you’d like some more themed digests like this! What better place to start than with a nice, digestible history of screen readers?

A Brief History of Screen Readers

  • The first screen reader (for DOS) was created in 1986 by IBM researcher and accessibility pioneer Jim Thatcher. IBM Screen Reader/2 was later developed to work with Windows 95 and IBM OS/2. To this day, Jim’s family sponsors the annual Jim Thatcher Prize, awarded to individuals who technically advance tools that improve access and increase participation for people with disabilities.
  • Since 2009, WebAIM has surveyed screen reader users every year to monitor their preferences. The 2019 results show that NVDA, JAWS and VoiceOver are the most used on desktop/laptop, and VoiceOver the most used on mobile.
  • The article – as the title suggests – is brief, and jumps straight to present day screen readers, with a one line summary of their histories:
    • JAWS (Job Access With Speech) was developed by Freedom Scientific, for DOS and then Windows.
    • NVDA (Nonvisual Desktop Access) was first released in 2006.
    • Given VoiceOver’s popularity, the article offers frustratingly little by way of its history. So, for completeness, here are my conclusions from a quick search: VoiceOver first appeared in OS X 10.4 (Tiger) in 2005. It was then added to the iPod Shuffle – which had no screen – to read out song titles, and was intended to be used by all rather than marketed as an accessibility feature. It was first added to iOS with the release of the (third generation) iPhone 3GS in 2009.
    • Other screen readers are mentioned in passing too; I’ve added quick notes for some of these. Microsoft’s Narrator (built into Windows 2000 and above), Linux’s Orca (released in 2006 by Sun Microsystems – now Oracle), Android’s TalkBack and ChromeOS’s ChromeVox.

A message to web developers, from a screen reader user

  • Holly Tuke, who is blind, explains to web developers the positive impact they can have on her web experience by following some simple tips:
    • Try to get comfortable with a screen reader – it will make it easier to spot issues in the code. Turn your monitor off to stop yourself glossing over mistakes.
    • As a screen reader user, Holly relies on keyboard only navigation. Get your site working with keyboard and you’re a lot of the way there.
    • The most common issues are unlabelled links and buttons, inaccessible web forms, and no heading structure (h1, h2 etc).
  • 98% of the most visited websites in the world do not meet all accessibility standards. The positive thing career-wise is that “web developers who champion accessibility have an opportunity to stand out”.
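The most common issue Holly lists – unlabelled links and buttons – is also one of the cheapest to fix; a sketch (the icon markup is a placeholder):

```html
<!-- Unlabelled icon button: announced only as "button" -->
<button type="button">
  <svg aria-hidden="true"><!-- magnifying glass icon --></svg>
</button>

<!-- Labelled: announced as "Search, button" -->
<button type="button" aria-label="Search">
  <svg aria-hidden="true"><!-- magnifying glass icon --></svg>
</button>
```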

A developer’s perspective: the problem with screen reader testing

  • Jake Tracey laments the sheer number of different screen readers and browser/OS combinations, and the lack of data around screen reader versions. Jake argues we can realistically only test in the latest versions of screen readers, like we do with Firefox and Chrome.
  • Automated tests are all well and good, but they can only tell you whether your code is valid. Given that screen reader support for ARIA is still patchy across browser combinations, you can only verify your website works for your users by manually testing in the same combos as them.
  • The market share of desktop screen readers shows a steady rise in NVDA, which overtook JAWS as the most dominant screen reader back in 2019. For this reason, Jake suggests concentrating testing efforts on NVDA (with Windows and Chrome), as its market share is only set to increase further, especially given it is free and JAWS is paid.
  • We should also test on macOS Safari with VoiceOver, iOS Safari with VoiceOver, and Android with TalkBack.
  • Less popular screen readers should be tested by a dedicated accessibility tester on your team; developers won’t have time.

Thoughts on screen readers and image recognition

  • LĂ©onie Watson talks about image alt text and the fact that over 30% of homepage images are missing text descriptions, while an additional 10% have useless alt text such as “image” or “blank”.
  • Screen readers have Optical Character Recognition (OCR) support, which can examine a graphic and convert it to text. I hadn’t heard of this as a feature of screen readers, but a quick search shows there’s an OCR add-on for NVDA.
  • Some screen readers – such as VoiceOver on iOS – now have image recognition capabilities too. I talked about this in dai11y 22/12/2020: iOS 14 can recognise icons and buttons even if they’re not marked up as such.
  • LĂ©onie tested the Picture Smart feature in JAWS on an image of the Mona Lisa. It identified that it contained a “drawing, human face, painting, person, sketch and woman”, and that it “probably” contained “art, portrait and text”. This is a good result compared to its analysis of a more obscure image, which was far less descriptive.
  • She concludes: “image recognition in screen readers is a massive improvement over the absence of anything better, but it isn’t better than a text description provided by a content author who knows exactly what’s in the image, why it’s being used and the context it’s being used in.”

We finish our #WeekOfScreenReader with a double article special: technical deep dives into data tables and dialog focus:

Article 1: How screen readers navigate data tables

  • LĂ©onie Watson describes how she navigates data tables with NVDA (alternatively you can watch the video demonstration – 2 mins). Her demo table is marked up with a caption, a first row containing all <th> heading cells, and a first column containing all <th> cells. LĂ©onie first navigates the heading columns and rows to figure out what’s in the table, then zones in on a particular row that may be of interest. Depending on whether she’s moving left/right or up/down, NVDA will either repeat the column heading or the row heading before announcing the selected cell contents. In other words, it relies on her remembering the value of the unannounced heading. This is to reduce verbosity.
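The table shape LĂ©onie describes boils down to the following (the caption, headings and scope attributes are my own illustration – her point is simply that the first row and first column are <th> cells):

```html
<table>
  <caption>Screen reader usage by platform</caption>
  <tr>
    <td></td>
    <th scope="col">Desktop</th>
    <th scope="col">Mobile</th>
  </tr>
  <tr>
    <th scope="row">VoiceOver</th>
    <td>…</td>
    <td>…</td>
  </tr>
  <tr>
    <th scope="row">NVDA</th>
    <td>…</td>
    <td>…</td>
  </tr>
</table>
```

With this markup, moving left/right announces the column heading before the cell, and moving up/down announces the row heading – exactly the behaviour described above.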

Article 2: Dialog Focus in Screen Readers

  • Adrian Roselli discusses creating an accessible modal, using the inert keyword described in dai11y 18/01/2021. But where should the focus go when the modal is opened – the modal itself, the heading within the modal, or the close button?
  • Adrian tries out several screen reader / browser / OS combinations to hear what is announced in each of the 3 focus scenarios. The results differ wildly.
  • Adrian avoids concluding where the focus should go, saying that you should test with your users. In other words, consider which focus gives the best UX for the most popular screen reader combos your audience is using.
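The three scenarios amount to choosing one of three focus() calls when the dialog opens; a sketch (the IDs and class names are placeholders – note the container and heading need tabindex="-1" to be programmatically focusable):

```html
<div id="dialog" role="dialog" aria-modal="true" tabindex="-1">
  <h2 id="dialog-heading" tabindex="-1">Settings</h2>
  <button type="button" class="close">Close</button>
</div>
<script>
  // Pick exactly one of these when the dialog opens:
  document.getElementById('dialog').focus();            // 1: the dialog itself
  // document.getElementById('dialog-heading').focus(); // 2: the heading
  // document.querySelector('#dialog .close').focus();  // 3: the close button
</script>
```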

Whew, that was a long newsletter! Did you know that you can subscribe to smaller, more frequent updates? The dai11y, week11y and fortnight11y newsletters get exactly the same content. The choice is entirely up to you! Curated with ♥ by developer @ChrisBAshton.
