fortnight11y issue 30

Your fortnightly frequent11y newsletter, brought to you by @ChrisBAshton:

Accessibility in tech improved in 2020, but more must be done

  • A mammoth article highlighting the key accessibility improvements made by six giants of tech: Apple, Google, Microsoft, Amazon, Facebook and Twitter. There’s a small conclusion at the end, briefly mentioning a few household names that have yet to fix fundamental issues in their apps, but the majority of the article focuses positively on the companies above.
  • I learned that Microsoft deliberately designed the Xbox Series S/X boxes so that people with disabilities can open them unassisted, and that the consoles’ ports have tactile nubs to help low-vision users identify them. I also learned that Amazon have teamed up with Voiceitt – a speech recognition company – to make Alexa usable by people with speech impairments.
  • Thanks to Matt Hobbs’ Frontend Fuel for linking me to the article.

Death of the PDF? Not quite, but it’s great news for accessibility

  • Danny Bluestone writes about the significance of the change in content design guidance on GOV.UK, which came into effect on 7th December. The updated guidance states “If you publish a PDF or other non-HTML document without an accessible version, you may be breaking the law”. Government departments are expected to phase out their usage of PDF as a way of publishing content.
  • The article highlights some great reasons why PDFs don’t work well online: they’re not responsive (so don’t scale on mobile), it’s difficult for visually impaired users to change their colour scheme and text size, and they easily become out of date as they’re harder to maintain.

Microsoft Backs Development of Smart Cane for Visually Impaired

  • An interesting idea from London-based startup WeWalk, who have recently joined Microsoft’s AI for Accessibility program. Their ‘smart cane’ uses ultrasonic object detection to spot hazards such as parked cars and, paired with a smartphone app, also offers turn-by-turn GPS navigation and taxi-booking facilities. The cane will retail at $600.

Is Progressive Enhancement Dead Yet? (video, 8 mins)

  • Another Heydon Pickering ‘Web Briefs’ video, with a somewhat clickbaity title. This isn’t an analysis of frontend strategies in 2021, but a characteristically opinionated explanation of what good vs bad progressive enhancement looks like. In it, Heydon reinforces that:
  • Sites should be functional and have decent layouts by default. Using CSS feature queries (@supports), you can progressively enhance to better layouts; you should not use JavaScript to ‘fill in’ unsupported CSS, because JavaScript is inefficient at rendering. JS modules should be imported using <script type="module">, which older browsers simply ignore. (See the sketch after this list.)
  • Progressive enhancement is not displaying a “Please turn on JavaScript” message, or rendering HTML only for it to re-render with JS ‘hydration’.
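A rough sketch of both techniques, to make them concrete (the class name and file name are my own, hypothetical):

    <style>
      /* Fallback: cards stack in a single column by default. */
      .cards > * { margin-bottom: 1em; }

      /* Enhancement: applied only where the browser supports grid. */
      @supports (display: grid) {
        .cards {
          display: grid;
          grid-template-columns: repeat(3, 1fr);
          gap: 1em;
        }
        .cards > * { margin-bottom: 0; }
      }
    </style>

    <!-- Browsers too old for modern JavaScript ignore type="module" entirely. -->
    <script type="module" src="enhancements.js"></script>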

Focus management and inert

  • Article by Eric Bailey, reminding developers to avoid manually specifying a tab order with tabindex="[positive integer]" (there is, arguably, never a good reason to do this). But tabindex="-1" is great for building accessible widgets: it makes an element focusable with JavaScript or click/tap where it would otherwise not be (i.e. if it is not a link, a button or an input).
  • One of the hardest things to get right is “focus trapping”: restricting focus so that it only applies to elements within your modal, so that keyboard users don’t get lost tabbing through invisible elements underneath. The inert attribute makes implementation a lot easier. Assuming your modal is in a <div> outside of your <main>, apply the attribute with <main inert> and nothing within <main> will be focusable. Browser support is extremely poor at the moment, but expect that to change in 2021. (See the sketch after this list.)
  • I learned about a screen reader mode I hadn’t heard of: “interaction mode”. This allows users to explore the page with a ‘virtual cursor’, without applying focus to any of the content. Naturally that won’t play well with your modal, so liberal use of aria-hidden="true" is the answer.
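A minimal sketch of the pattern, assuming a hypothetical confirmation modal (the dialog role and labelling follow common practice rather than Eric’s article itself):

    <main inert>
      <!-- While the modal is open, nothing in here can receive focus. -->
    </main>

    <div role="dialog" aria-labelledby="modal-title">
      <h2 id="modal-title" tabindex="-1">Confirm your changes</h2>
      <button type="button">Close</button>
    </div>

    <script>
      // tabindex="-1" means the heading can be focused programmatically,
      // without being added to the keyboard tab order.
      document.getElementById('modal-title').focus();
    </script>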

I then thought I’d try something different, and bring you five articles on screen readers. Let me know if you enjoy #WeekOfScreenReader and whether you’d like more themed digests like this! What better place to start than with a nice, digestible history of screen readers?

A Brief History of Screen Readers

  • The first screen reader (for DOS) was created in 1986 by IBM researcher and accessibility pioneer Jim Thatcher. IBM Screen Reader/2 was later developed to work with Windows 95 and IBM OS/2. To this day, Jim’s family sponsors the annual Jim Thatcher Prize, awarded to individuals who technically advance tools that improve access and increase participation for people with disabilities.
  • Since 2009, WebAIM has surveyed screen reader users every year to monitor their preferences. The 2019 results show that NVDA, JAWS and VoiceOver are the most used on desktop/laptop, and VoiceOver on mobile.
  • The article – as the title suggests – is brief, and jumps straight to present-day screen readers, with a one-line summary of their histories:
    • JAWS (Job Access With Speech) was developed by Freedom Scientific, for DOS and then Windows.
    • NVDA (Nonvisual Desktop Access) was first released in 2006.
    • Given VoiceOver’s popularity, the article offers frustratingly little by way of its history. So for completeness, here are my conclusions from a quick search: VoiceOver first appeared in OS X 10.4 (Tiger) in 2005. It was then added to the iPod Shuffle – which had no screen – to read out song titles, and was intended to be used by all rather than marketed as an accessibility feature. It first came to iOS with the release of the iPhone 3GS (the third-generation iPhone) in 2009.
    • Other screen readers are mentioned in passing too; I’ve added quick notes for some of these: Microsoft’s Narrator (built into Windows 2000 and above), Linux’s Orca (released in 2006 by Sun Microsystems – now Oracle), Android’s TalkBack and ChromeOS’s ChromeVox.

A message to web developers, from a screen reader user

  • Holly Tuke, who is blind, explains to web developers the positive impact they can have on her web experience by following some simple tips:
    • Try to get comfortable with a screen reader – it will make it easier to spot issues in the code. Turn your monitor off to stop yourself glossing over mistakes.
    • As a screen reader user, Holly relies on keyboard-only navigation. Get your site working with a keyboard and you’re a lot of the way there.
    • The most common issues are unlabelled links and buttons, inaccessible web forms, and no heading structure (h1, h2 etc) – see the sketch after this list.
  • 98% of the most visited websites in the world do not meet all accessibility standards. The positive thing career-wise is that “web developers who champion accessibility have an opportunity to stand out”.
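To make those common issues concrete, here is a small before/after sketch of my own (the markup is not taken from Holly’s article):

    <!-- Unlabelled: a screen reader announces only “button”. -->
    <button><svg aria-hidden="true"><!-- icon --></svg></button>

    <!-- Labelled: announced as “Search, button”. -->
    <button aria-label="Search"><svg aria-hidden="true"><!-- icon --></svg></button>

    <!-- Accessible form fields have programmatically associated labels. -->
    <label for="email">Email address</label>
    <input type="email" id="email" name="email">

    <!-- Headings should nest in order, giving the page a logical outline. -->
    <h1>Your account</h1>
    <h2>Personal details</h2>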

A developer’s perspective: the problem with screen reader testing

  • Jake Tracey laments the sheer number of different screen readers and browser/OS combinations, and the lack of data around screen reader versions. Jake argues we can realistically only test in the latest versions of screen readers, like we do with Firefox and Chrome.
  • Automated tests are all well and good, but can only tell you if your code is valid. Given that screen reader support for ARIA is still patchy across browser combinations, you can only verify your website works for your users by manually testing in the same combinations they use.
  • The market share of desktop screen readers shows a steady rise in NVDA, which overtook JAWS as the dominant screen reader back in 2019. For this reason, Jake suggests concentrating testing efforts on NVDA (with Windows and Chrome), as its market share is only set to increase further, especially given that NVDA is free and JAWS is paid.
  • We should also test on macOS Safari with VoiceOver, iOS Safari with VoiceOver, and Android with TalkBack.
  • Less popular screen readers should be tested by a dedicated accessibility tester on your team; developers won’t have time.

Thoughts on screen readers and image recognition

  • LĂ©onie Watson talks about image alt text and the fact that over 30% of homepage images are missing text descriptions. An additional 10% had useless alt text such as “image” or “blank”.
  • Screen readers have Optical Character Recognition (OCR) support, which can examine a graphic and convert it to text. I hadn’t heard of this as a feature of screen readers, but a quick search shows there’s an OCR add-on for NVDA.
  • Some screen readers – such as VoiceOver on iOS – now have image recognition capabilities too. I talked about this in dai11y 22/12/2020: iOS 14 can recognise icons and buttons even if they’re not marked up as such.
  • LĂ©onie tested the Picture Smart feature in JAWS on an image of the Mona Lisa. It identified that it contained a “drawing, human face, painting, person, sketch and woman”, and that it “probably” contained “art, portrait and text”. This is a good result compared to its analysis of a more obscure image, which was far less descriptive.
  • She concludes: “image recognition in screen readers is a massive improvement over the absence of anything better, but it isn’t better than a text description provided by a content author who knows exactly what’s in the image, why it’s being used and the context it’s being used in.”
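To illustrate the difference Léonie describes (the example markup and alt text are my own):

    <!-- Useless: announced as just “image”, telling the user nothing. -->
    <img src="mona-lisa.jpg" alt="image">

    <!-- Useful: written by an author who knows the image and its context. -->
    <img src="mona-lisa.jpg" alt="The Mona Lisa: Leonardo da Vinci’s portrait of a woman with a faint, enigmatic smile">

    <!-- Purely decorative images should have empty alt text, so screen readers skip them. -->
    <img src="decorative-divider.png" alt="">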

We finish our #WeekOfScreenReader with a double-article special: technical deep dives into data tables and dialog focus:

Article 1: How screen readers navigate data tables

  • LĂ©onie Watson describes how she navigates data tables with NVDA (alternatively you can watch the video demonstration – 2 mins). Her demo table is marked up with a caption, a first row containing all <th> heading cells, and a first column containing all <th> cells. LĂ©onie first navigates the heading columns and rows to figure out what’s in the table, then zones in on a particular row that may be of interest. Depending on whether she’s moving left/right or up/down, NVDA will either repeat the column heading or the row heading before announcing the selected cell contents. In other words, it relies on her remembering the value of the unannounced heading. This is to reduce verbosity.

Article 2: Dialog Focus in Screen Readers

  • Adrian Roselli discusses creating an accessible modal, using the inert attribute described in dai11y 18/01/2021. But where should focus go when the modal is opened – the modal itself, the heading within the modal, or the close button? (The sketch after this list shows all three.)
  • Adrian tries out several screen reader/browser/OS combinations to hear what is announced in each of the three focus scenarios. The results differ wildly.
  • Adrian avoids concluding where the focus should go, saying that you should test with your users. In other words, consider which focus gives the best UX for the most popular screen reader combos your audience is using.
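For reference, the three candidate focus targets look something like this (the markup and IDs are my own, not Adrian’s):

    <div role="dialog" aria-labelledby="dialog-title" tabindex="-1">
      <h2 id="dialog-title" tabindex="-1">Delete this file?</h2>
      <button type="button">Close</button>
    </div>

    <script>
      // Exactly one of these runs when the dialog opens; screen readers
      // announce something different in each case, so test with your users.
      document.querySelector('[role="dialog"]').focus();           // 1: the dialog itself
      // document.getElementById('dialog-title').focus();          // 2: the heading
      // document.querySelector('[role="dialog"] button').focus(); // 3: the close button
    </script>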

Did you know that you can subscribe to dai11y, week11y, fortnight11y or month11y updates? Every newsletter gets the same content; it is your choice to have short, regular emails or longer, less frequent ones. Curated with ♥ by developer @ChrisBAshton.
