week11y issue 71

Your weekly frequent11y newsletter, brought to you by @ChrisBAshton:

Is your CAPTCHA keeping humans out?

  • CAPTCHAs are important for preventing DDoS attacks: they stop botnets from reaching processor-intensive parts of websites, such as login forms. But they can produce false positives, locking real humans out, which is particularly bad in the COVID-19 era, when it is essential to be able to access services virtually. The article goes on to describe the history of CAPTCHA development:
  • reCAPTCHA is a CAPTCHA service, originally an independent company, that was acquired by Google; it accounts for around 93% of all CAPTCHAs on the web.
  • Early versions of CAPTCHA software had users deciphering distorted words and numbers, and typing these into a box. These should no longer be used today, as they are entirely visual and therefore inaccessible to users with visual impairments.
  • reCAPTCHA version 2, released in 2014, analyses the way the cursor moves across the screen to determine whether the motion is likely to be human. If it isn’t, it presents the user with an audio or visual challenge, such as clicking images which contain fire hydrants.
  • reCAPTCHA version 3 was released in 2018; it eliminates user challenges altogether and returns a “probability score” indicating the likelihood that the user is human. It is up to developers to take extra steps if the score is low, e.g. authenticate the user through an email link (see the sketch after this list).
  • The article closes by asking developers not to roll their own CAPTCHA solutions, which are likely to be less accessible than the industry standards.
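
To make the version 3 flow concrete, here is a minimal sketch of the server-side check, assuming a Node 18+ environment (for the global fetch). The 0.5 threshold and the email-link fallback are illustrative choices, not prescribed by the article:

    // Minimal sketch of verifying a reCAPTCHA v3 token on the server.
    const VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify";

    async function isLikelyHuman(token: string, secret: string): Promise<boolean> {
      // v3 never challenges the user; it just returns a score between
      // 0.0 (likely a bot) and 1.0 (likely a human).
      const res = await fetch(VERIFY_URL, {
        method: "POST",
        headers: { "Content-Type": "application/x-www-form-urlencoded" },
        body: new URLSearchParams({ secret, response: token }).toString(),
      });
      const data = (await res.json()) as { success: boolean; score?: number };
      return data.success && (data.score ?? 0) >= 0.5; // threshold is up to you
    }

    // If this returns false, don't block the user outright: fall back to
    // another step, e.g. authenticating them via an email link.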

Alt text that informs: Meeting the needs of people who are blind or low vision

  • A really interesting article by Microsoft that is not (as I suspected from the headline) your typical “how to write good alt text” article.
  • A recent Microsoft study found that users who rely on alt text want different alt text depending on the context of the image:
  • “For example, if a photo of a person appeared in a news story, people might want a description that includes details about the setting of the image to give a sense of place. But if a photo of a person appeared on a social media or dating website, people might want increased details about that person’s appearance, including some details that may be subjective and/or sensitive, such as race, perceived gender, and attractiveness”.
  • “One participant mentioned that knowing the race and gender of people in photos of board members on an employer/employment website might help them understand whether the company values a diverse workplace. These latter examples illustrate practical and ethical challenges for emerging AI systems, such as whether AI systems can – or should – be trained to provide subjective judgments or information about sensitive demographic attributes.”
  • The article includes a table of contexts (such as e-commerce, news, dating) cross-referenced against the image properties that the study’s participants said were important to include in the alt text for each context (e.g. weather, expression, hair colour).
  • Microsoft concludes that new categories of metadata should be produced to feed into improved machine learning models, and there should be “custom vision-to-language models” that give different alt text depending on the context in which an image appears.

Next up is a Steve Faulkner special, as I’ve had two of his blog posts bookmarked for some time!

  • re-upped: placeholder – the piss-take label
    • “While the hint given by the control’s label is shown at all times, the short hint given in the placeholder attribute is only shown before the user enters a value. Furthermore, placeholder text may be mistaken for a pre-filled value, and as commonly implemented the default color of the placeholder text provides insufficient contrast and the lack of a separate visible label reduces the size of the hit region available for setting focus on the control.”
    • This bonus article from HTMHell adds that translation tools such as Google Translate may not translate attribute values, placeholder text gets cut off beyond the size of the field, and “if browsers auto-fill fields, users have to cut-and-paste auto-filled values to check if browsers filled in fields correctly”. (A markup sketch of the visible-label alternative follows this list.)
  • aria-description: By Public Demand and to Thunderous Applause
    • The new aria-description attribute coming in WAI-ARIA 1.3 is similar to aria-label (it takes a string of text to associate with an element), but is intended for more verbose information. Steve sees it as replacing aria-describedby in those cases where the linked element is visually hidden, e.g. <a href="#" aria-describedby="help">Help</a><div id="help" class="visually-hidden">This description is for screen reader users only</div> (see the second sketch after this list).
    • It’s supported in Chrome, Firefox and Edge already.
    • Steve closes with some advice: for aria-label, a word or phrase is better than a sentence, and for aria-describedby or aria-description, a sentence is better than a paragraph.
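
To illustrate the placeholder point, here is a minimal markup sketch (the field and hint text are invented for illustration): the visible label and hint stay on screen after the user starts typing, whereas the placeholder-only hint disappears:

    <!-- Placeholder-only: the hint vanishes as soon as the user types, and
         the lack of a visible label shrinks the clickable hit region -->
    <input type="email" placeholder="Use your work email address">

    <!-- Better: a visible label, with the hint kept on screen at all times
         and associated with the field via aria-describedby -->
    <label for="email">Email</label>
    <input type="email" id="email" aria-describedby="email-hint">
    <p id="email-hint">Use your work email address.</p>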
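
And here is the aria-description swap Steve describes, using his own example; note that aria-description is still a WAI-ARIA 1.3 draft, so support may change:

    <!-- Today: aria-describedby pointing at a visually hidden element -->
    <a href="#" aria-describedby="help">Help</a>
    <div id="help" class="visually-hidden">This description is for screen reader users only</div>

    <!-- With aria-description: the same text inline, with no extra element -->
    <a href="#" aria-description="This description is for screen reader users only">Help</a>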

Apple Music Adds “Saylists” to Help People with Speech-Sound Disorders

  • At the end of March, Apple worked with Warner Music to launch the “Saylists” feature on Apple Music. The feature helps users find songs with lyrics and sounds that can be challenging to vocalise for people with a speech-sound disorder (SSD), which affects one in 12 children in the UK. Getting people with SSD to repeat challenging sounds (such as words beginning with “ch”, “g”, “k” and “z”) is one of the most successful strategies for treating the disorder.

Did you know that you can subscribe to dai11y, week11y, fortnight11y or month11y updates? Every newsletter gets the same content; it is your choice to have short, regular emails or longer, less frequent ones. Curated with ♥ by developer @ChrisBAshton.
