week11y issue 60

Your weekly frequent11y newsletter, brought to you by @ChrisBAshton:

Hello! This week I thought I’d try something different, and bring you five different articles on screen readers. Let me know if you enjoy #WeekOfScreenReader and whether you’d like some more themed digests like this!

What better place to start than with a nice, digestible history of screen readers?

A Brief History of Screen Readers

  • The first screen reader, IBM Screen Reader for DOS, was created in 1986 by IBM researcher and accessibility pioneer Jim Thatcher; its successor, IBM Screen Reader/2, was developed for IBM's OS/2 operating system. To this day, Jim's family sponsors the annual Jim Thatcher Prize, awarded to individuals who technically advance tools that improve access and increase participation for people with disabilities.
  • Since 2009, WebAIM has run regular surveys of screen reader users to monitor their preferences. The 2019 results show that NVDA, JAWS and VoiceOver are the most used on desktop/laptop, and VoiceOver is the most used on mobile.
  • The article – as the title suggests – is brief, and jumps straight to present day screen readers, with a one line summary of their histories:
    • JAWS (Job Access With Speech) was developed by Freedom Scientific for DOS, and then for Windows.
    • NVDA (Nonvisual Desktop Access) was first released in 2006.
    • Given VoiceOver’s popularity, the article offers frustratingly little by way of its history. So, for completeness, here’s what a quick search turned up: VoiceOver first appeared in OS X 10.4 (Tiger) in 2005. It was then added to the iPod Shuffle – which had no screen – to read out song titles, and was pitched as a feature for everyone rather than marketed as an accessibility feature. It first came to iOS with the release of the third-generation iPhone, the 3GS, in 2009.
    • Other screen readers are mentioned in passing too; I’ve added quick notes for some of these: Microsoft’s Narrator (built into Windows 2000 and later), Linux’s Orca (released in 2006 by Sun Microsystems, now Oracle), Android’s TalkBack and ChromeOS’s ChromeVox.

A message to web developers, from a screen reader user

  • Holly Tuke, who is blind, explains to web developers the positive impact they can have on her web experience by following some simple tips:
    • Try to get comfortable with a screen reader – it will make it easier to spot issues in the code. Turn your monitor off to stop yourself glossing over mistakes.
    • As a screen reader user, Holly relies on keyboard-only navigation. Get your site working with the keyboard alone and you’re a lot of the way there.
    • The most common issues are unlabelled links and buttons, inaccessible web forms, and a lack of heading structure (h1, h2, etc.).
  • 98% of the most visited websites in the world do not meet all accessibility standards. The positive thing career-wise is that “web developers who champion accessibility have an opportunity to stand out”.
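The three common issues Holly lists can be illustrated with a short markup sketch (the element names and text here are invented for illustration):

```html
<!-- Heading structure: one h1, with h2s nested logically beneath it -->
<h1>Our bakery</h1>
<h2>Opening hours</h2>

<!-- A labelled button: visible text gives the screen reader something to announce -->
<button type="submit">Search</button>

<!-- An icon-only button needs an accessible name, e.g. via aria-label -->
<button type="button" aria-label="Close navigation">✕</button>

<!-- An accessible form field: the label is programmatically tied to the input -->
<label for="email">Email address</label>
<input id="email" type="email" name="email">
```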

A developer’s perspective: the problem with screen reader testing

  • Jake Tracey laments the sheer number of different screen readers and browser/OS combinations, and the lack of data around screen reader versions. Jake argues we can realistically only test in the latest versions of screen readers, like we do with Firefox and Chrome.
  • Automated tests are all well and good, but they can only tell you whether your code is valid. Given that screen reader support for ARIA is still patchy across browser/screen reader combinations, you can only verify your website works for your users by manually testing in the same combinations they use.
  • The market share of desktop screen readers shows a steady rise for NVDA, which overtook JAWS as the dominant screen reader in 2019. For this reason, Jake suggests concentrating testing efforts on NVDA (with Windows and Chrome): its market share is only set to increase, especially given that NVDA is free and JAWS is paid.
  • We should also test on macOS Safari with VoiceOver, iOS Safari with VoiceOver, and Android with TalkBack.
  • Less popular screen readers should be tested by a dedicated accessibility tester on your team; developers won’t have time.

Thoughts on screen readers and image recognition

  • LĂ©onie Watson talks about image alt text and the fact that over 30% of homepage images are missing text descriptions, while a further 10% have useless alt text such as “image” or “blank”.
  • Some screen readers offer Optical Character Recognition (OCR), which can extract any text contained in an image and read it aloud. I hadn’t heard of this as a screen reader feature, but a quick search shows there’s an OCR add-on for NVDA.
  • Some screen readers – such as VoiceOver on iOS – now have image recognition capabilities too. I talked about this in dai11y 22/12/2020: iOS 14 can recognise icons and buttons even if they’re not marked up as such.
  • LĂ©onie tested the Picture Smart feature in JAWS on an image of the Mona Lisa. It identified that it contained a “drawing, human face, painting, person, sketch and woman”, and that it “probably” contained “art, portrait and text”. This is a good result compared to its analysis of a more obscure image, which was far less descriptive.
  • She concludes: “image recognition in screen readers is a massive improvement over the absence of anything better, but it isn’t better than a text description provided by a content author who knows exactly what’s in the image, why it’s being used and the context it’s being used in.”

We finish our #WeekOfScreenReader with a double article special: technical deep dives into data tables and dialog focus:

Article 1: How screen readers navigate data tables

  • LĂ©onie Watson describes how she navigates data tables with NVDA (alternatively, you can watch the two-minute video demonstration). Her demo table is marked up with a caption, a first row of <th> heading cells, and a first column of <th> cells. LĂ©onie first navigates the column and row headings to get a feel for what’s in the table, then zeroes in on a particular row of interest. Depending on whether she’s moving left/right or up/down, NVDA announces either the column heading or the row heading before the selected cell’s contents. The other heading goes unannounced – this reduces verbosity, but relies on her remembering its value.
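A minimal sketch of the kind of table described above – a caption, column headings in the first row, and row headings in the first column (the data is invented for illustration):

```html
<table>
  <caption>Average rainfall by city (mm)</caption>
  <tr>
    <th scope="col">City</th>
    <th scope="col">Spring</th>
    <th scope="col">Summer</th>
  </tr>
  <tr>
    <th scope="row">London</th>
    <td>146</td>
    <td>120</td>
  </tr>
  <tr>
    <th scope="row">Cardiff</th>
    <td>290</td>
    <td>256</td>
  </tr>
</table>
```

The scope attribute makes the header/cell relationships explicit, which is what lets a screen reader announce the right heading as the user moves between cells.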

Article 2: Dialog Focus in Screen Readers

  • Adrian Roselli discusses creating an accessible modal, using the inert attribute described in dai11y 18/01/2021. But where should the focus go when the modal is opened – the modal itself, the heading within the modal, or the close button?
  • Adrian tries out several screen reader / browser / OS combinations to hear what is announced in each of the three focus scenarios. The results differ wildly.
  • Adrian stops short of concluding where the focus should go, saying that you should test with your users – in other words, consider which focus target gives the best UX on the screen reader combinations your audience actually uses.
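A rough sketch of the kind of setup being tested, assuming browser support for the inert attribute (the element names are invented; the focus call at the end is the variable under test – it could equally target the heading or the close button):

```html
<main id="page-content">
  <button id="open">Open dialog</button>
</main>

<div id="modal" role="dialog" aria-modal="true" aria-labelledby="modal-title" hidden>
  <h2 id="modal-title" tabindex="-1">Confirm your choice</h2>
  <button id="close">Close</button>
</div>

<script>
  const modal = document.getElementById('modal');
  document.getElementById('open').addEventListener('click', () => {
    // Make everything behind the dialog unreachable to keyboard and screen reader
    document.getElementById('page-content').inert = true;
    modal.hidden = false;
    // The question under test: which of these should receive focus?
    modal.tabIndex = -1;
    modal.focus();                                  // option 1: the dialog itself
    // modal.querySelector('#modal-title').focus(); // option 2: the heading
    // modal.querySelector('#close').focus();       // option 3: the close button
  });
</script>
```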

Did you know that you can subscribe to dai11y, week11y, fortnight11y or month11y updates? Every newsletter gets the same content; it is your choice to have short, regular emails or longer, less frequent ones. Curated with ♥ by developer @ChrisBAshton.
