Accessibility Fail Fast Testing Prep
[Image: four black-and-white cartoon drawings of people falling: off a ladder, slipping on a floor, down a flight of stairs, and off a curb]
It was Thomas Edison who famously said:
“I have not failed. I’ve just found 10,000 ways that won’t work.”
While Thomas Edison never had to deal with the internet (or accessibility!), his principle of “learning from failure” is commonly applied in software projects today.
Fail fast is a type of QA methodology commonly seen in Agile projects. The idea is to find catastrophic failures early, by:
testing as early as possible, rather than waiting for the “formal” testing cycle that traditionally occurs at the end
testing known weak points first, so failures surface quickly in the QA cycle and can be sent back for rework BEFORE you have invested work that would have to be redone after a catastrophic failure late in the game
Because only approximately 30% of HTML accessibility testing can be automated and the other 70% is manual, there is a lot to be said for an accessibility flavor of the fail-fast methodology. No one wants to run three comprehensive manual accessibility test cycles because something important failed at the end of each manual cycle. Prioritizing the important, “must work” items first means that if you do have a catastrophic failure, you will find it with a minimum investment of time.
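To make the automatable ~30% pay off early, you can wire an automated scan into your pipeline before any manual testing starts. Here is a minimal sketch using axe-core’s Playwright integration; the URL is a placeholder, and the `wcag2a`/`wcag2aa` tags restrict the scan to WCAG A and AA rules.

```typescript
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('fail fast: automated WCAG A/AA scan', async ({ page }) => {
  await page.goto('https://example.com/checkout'); // placeholder URL

  // Run only the WCAG A and AA rule sets so the automated pass mirrors
  // the priorities of the manual cycle that follows it.
  const results = await new AxeBuilder({ page })
    .withTags(['wcag2a', 'wcag2aa'])
    .analyze();

  // Any violation found here is cheap to fix now and expensive later.
  expect(results.violations).toEqual([]);
});
```

Running this on every build means the automatable failures never survive long enough to block a manual test cycle.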
There are three significant preparation steps you should take before testing begins:
Accessibility Fail Fast Preparation Step 1
Make sure accessibility testing and functional QA are separate and staggered on the project release schedule
There are three major reasons why separating functional QA and accessibility testing is an important Fail Fast Accessibility strategy:
If there are significant functional defects, progress in accessibility testing will be slow and painstaking (assuming progress can be made at all).
Significant functional defects combined with poor accessibility pretty much eliminate your ability to use native users of assistive technology as testers.
Accessibility testing will have to be repeated once the major functional defects have been resolved, because the fixes likely touch code that you have already tested; you need to make sure each functional fix didn’t step on any of the code needed for accessibility.
Accessibility Fail Fast Preparation Step 2
Strategically evaluate which assistive technologies and browsers you are testing on
WebAIM’s annual screen reader user survey shows that more screen reader users than ever are using multiple screen readers, primarily:
JAWS
NVDA
VoiceOver
Narrator, Orca, and TalkBack have noticeable (if small) followings.
Screen readers — Desktop
One interesting data point is that Windows Narrator seems to be trending up, but it is still well below the two most popular Windows screen readers, JAWS and NVDA. Testing with Orca is essential if your software can be deployed on Linux.
Screen readers — Mobile
In the US, there is a strong preference among people with disabilities for VoiceOver, because of Apple’s incredibly strong commitment to accessibility. Outside of the US, usage is more evenly split, likely because of the higher cost of Apple devices. Android is not well liked among screen reader users, primarily because its accessibility approach is less robust than Apple’s, and also because TalkBack implementations can vary from hardware platform to hardware platform.
If the software’s user base is primarily American, test VoiceOver first
If the software’s user base is primarily outside of the US, test TalkBack first
**Make sure you test on oversized devices**
Magnification
People with vision loss, who may or may not be legally blind, use magnification rather than screen readers; some use magnification and screen readers together. There are about 500% more magnification users than screen reader users. Because those users need as much screen real estate as possible, they typically use larger mobile devices. I have seen many plus-sized mobile device-specific defects, especially ones related to pagination: these defects show up on a plus-sized mobile device but don’t exist on a regular-sized one. While I am sure the reverse must happen as well (a regular-sized mobile device bug that doesn’t show up on a plus-sized device), I’ve never personally seen one.
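If you use Playwright (or a similar runner), running the same smoke test against a regular and a plus-sized device profile is a cheap way to catch this class of defect early. A minimal sketch, assuming a hypothetical results page with a pagination landmark; the device names come from Playwright’s built-in device registry, and the URL is a placeholder.

```typescript
import { test, expect, devices } from '@playwright/test';

// Same page, two screen sizes: a regular and a plus-sized profile.
for (const name of ['iPhone 13', 'iPhone 13 Pro Max'] as const) {
  test(`pagination survives on ${name}`, async ({ browser }) => {
    const context = await browser.newContext({ ...devices[name] });
    const page = await context.newPage();
    await page.goto('https://example.com/results?page=2'); // placeholder URL

    // Pagination controls that overflow or vanish on the larger screen
    // are exactly the plus-sized-only defects described above.
    await expect(
      page.getByRole('navigation', { name: /pagination/i })
    ).toBeVisible();

    await context.close();
  });
}
```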
Keyboard
Keyboard-only testing on all platforms is essential.
If your software isn’t keyboard accessible, it doesn’t matter what the rest of your accessibility test results are: this is an automatic release blocker from my perspective.
Proper keyboard behavior is one of the most important A-level WCAG 2.1 guidelines. Without keyboard accessibility:
People who are blind won’t be able to use the software, since one or more components will require mouse or touch interaction
Switches (which basically trigger keyboard behavior) are not likely to work, excluding everyone who uses this form of assistive technology.
Keyboard-only users without vision issues (for example, people with carpal tunnel syndrome or a broken arm) won’t be able to use it.
There is one small component of one keyboard-related guideline (tab index) that can be covered by an automated test. However, most of the WCAG keyboard operation guidelines, plus keyboard trap, hover, and the two keyboard focus indicator guidelines, can only be validated through manual operation and visual inspection. Yes, the two keyboard focus indicator guidelines are AA. But as long as you are testing keyboard operation, you might as well throw these in at the same time for significant time savings. Plus, as a keyboard-only user with glaucoma, the focus indicator requirement is very important to me 😊, so I always make my team include it with the A-level tests.
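That one automatable slice (positive `tabindex` values, which override the natural tab order) takes only a few lines to check. A sketch, again with a placeholder URL; everything else in this section still needs hands on keys and eyes on the screen.

```typescript
import { test, expect } from '@playwright/test';

test('fail fast: no positive tabindex values', async ({ page }) => {
  await page.goto('https://example.com'); // placeholder URL

  // Positive tabindex hijacks the natural tab order, so flag every
  // element that declares one. (tabindex="0" and "-1" are fine.)
  const offenders = await page.$$eval('[tabindex]', (els) =>
    els
      .filter((el) => Number(el.getAttribute('tabindex')) > 0)
      .map((el) => el.outerHTML.slice(0, 80))
  );

  expect(offenders).toEqual([]);
});
```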
Browsers
If you officially support it, you need to test it for accessibility.
Some screen readers naturally pair with certain browsers. For example:
NVDA is best experienced on Firefox.
VoiceOver is best experienced on Safari.
Don’t, under ANY circumstances, test JAWS on IE or Edge. VFO and Microsoft seem to be in some kind of standoff right now where Microsoft is refusing to fix Microsoft browser bugs that prevent JAWS from working properly on Edge. I am guessing that is because Microsoft is trying to push people over to Narrator. But that is just me reading between the lines.
Do identify in your accessibility statement or VPAT what screen reader/browser combinations (and the version numbers of each) you tested on. That way if someone has an issue with an environment, they will know which environment stack was most thoroughly reviewed.
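One way to keep the pairings and the published statement in sync is to treat the tested matrix as data that both the test plan and the VPAT are generated from. A hypothetical sketch; every product name and version below is a placeholder, not a recommendation.

```typescript
// Each entry pairs a screen reader with the browser it works best in,
// plus the exact versions tested, so the accessibility statement/VPAT
// can be generated straight from the test plan.
interface TestedEnvironment {
  screenReader: string;
  screenReaderVersion: string;
  browser: string;
  browserVersion: string;
}

const testedEnvironments: TestedEnvironment[] = [
  { screenReader: 'NVDA', screenReaderVersion: '2023.2', browser: 'Firefox', browserVersion: '119' },
  { screenReader: 'VoiceOver', screenReaderVersion: 'macOS 14', browser: 'Safari', browserVersion: '17' },
  { screenReader: 'JAWS', screenReaderVersion: '2023', browser: 'Chrome', browserVersion: '118' },
];

// One line per stack, ready to paste into the accessibility statement.
for (const env of testedEnvironments) {
  console.log(
    `${env.screenReader} ${env.screenReaderVersion} with ${env.browser} ${env.browserVersion}`
  );
}
```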
Accessibility Fail Fast Preparation Step 3
You need a “Definition of Done”
Before you start testing, you need to know how to define when you are done testing. There are two definitions of done: one good, one not so good.
“Done” because the testing has failed and crucial defects need to be fixed before you want to invest more time in testing
“Done” because you are declaring the accessibility testing a success and the software is ready to release
The only way to define a “bad done” is to have one single, corporate-wide definition of what constitutes a critical defect. I have some thoughts on that in this article on prioritizing defects for remediation.
The best way I know of to define “good done” is to have use cases tied to your accessibility personas: anything a well-defined persona can do, they should be able to do with assistive technology. A functional spec can also get you there if you don’t have personas, though personas are better because they look at the UX end-to-end rather than at disjointed pieces of behavior that should each work.
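One lightweight way to make “good done” concrete is to write the persona use cases down as a checklist that testing has to exhaust. A hypothetical sketch; the personas, assistive technologies, and tasks below are placeholders.

```typescript
// "Good done" = every persona-level use case passes with the assistive
// technology that persona relies on; "bad done" = a critical defect
// stops testing before this list is exhausted.
interface AccessibilityUseCase {
  persona: string;          // a well-defined accessibility persona
  assistiveTech: string[];  // the AT this persona uses
  task: string;             // an end-to-end task, not a component check
  passed: boolean;
}

const doneCriteria: AccessibilityUseCase[] = [
  {
    persona: 'Blind screen reader user',
    assistiveTech: ['NVDA + Firefox'],
    task: 'Search for a product, add it to the cart, and check out',
    passed: false,
  },
  {
    persona: 'Keyboard-only user with low vision',
    assistiveTech: ['keyboard', '200% magnification'],
    task: 'Complete the same checkout without a mouse',
    passed: false,
  },
];

const goodDone = doneCriteria.every((uc) => uc.passed);
```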
…
In the follow-on to this article, which will be published next week, I will talk about the steps to actually execute your Fail Fast Accessibility Strategy once you have prepared for it.