Testing the accessibility of the user experience from start to finish helps ensure that all of our customers can complete their intended task while visiting the page. Tools and quick tests can detect accessibility bugs that interfere with page tasks. Automated testing should always be complemented with manual testing: testing tools can check for most standards, but manual testing validates whether the content is accurate and the experience is easy to use.
These quick tests will help you to focus on priority workflows and tasks.
Here are a few manual checks to do when reviewing a site for accessibility.
Page structure, text, and headings
Labels and alternate text
Design and test for different types of colour blindness
There are two methods for testing digital accessibility. Use automated scanning tools to identify coding errors that need little human verification, saving both time and cost.
End-to-end manual testing must also be completed to evaluate the impact on individual users. This methodology ensures that different assistive technologies and user needs are validated.
Test with these items, ideally in this order:
Desktop:
Keyboard (a sample keyboard check follows these lists)
Text resize to 200%
Colour contrast
High contrast themes
Reduced motion
Multimedia and document inclusion
Screen reader
Automated tests
Mobile:
Switch control / access
Larger text / pinch zoom
Colour contrast
Inverted colours / grayscale
Reduced motion
Multimedia and document inclusion
Screen reader
Automated tests
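As a concrete example of the keyboard item above, the sketch below tabs through a page and records the focus order. It assumes a Playwright test setup, and https://example.com is a placeholder for the page under test:

```ts
import { test, expect } from '@playwright/test';

test('keyboard: interactive elements are reachable with Tab', async ({ page }) => {
  await page.goto('https://example.com'); // placeholder URL

  const focusOrder: string[] = [];
  for (let i = 0; i < 10; i++) {
    await page.keyboard.press('Tab');
    // Record a short description of whichever element now holds focus.
    const description = await page.evaluate(() => {
      const el = document.activeElement as HTMLElement | null;
      return el ? `${el.tagName.toLowerCase()}${el.id ? '#' + el.id : ''}` : 'none';
    });
    focusOrder.push(description);
  }

  // At least one real control should have received focus; a page with no
  // keyboard-reachable controls blocks the task entirely.
  expect(focusOrder.some((d) => d !== 'none' && d !== 'body')).toBe(true);
});
```

Repeating the loop while watching the visible focus indicator also covers the manual half of the check: the order should match the reading order of the page.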
Automated scanners help identify about 20%* of code-related accessibility errors. These tests are important during the development and maintenance stages of a website or app.
They are also simple to use, and anyone in any role can run one to check that the easy accessibility wins are complete.
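For example, the axe-core engine can run from an automated test suite. This sketch assumes the @axe-core/playwright wrapper and a placeholder URL:

```ts
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('automated scan: no detectable accessibility violations', async ({ page }) => {
  await page.goto('https://example.com'); // placeholder URL

  // Run the axe-core rules against the whole page.
  const results = await new AxeBuilder({ page }).analyze();

  // Log each violation so it can be triaged with the priority levels below.
  for (const violation of results.violations) {
    console.log(`${violation.impact}: ${violation.id} - ${violation.help}`);
  }

  expect(results.violations).toEqual([]);
});
```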
Automated scanners alert you to errors like:
Form fields - Automated tests make sure that all entry fields have labels. They do not check whether the label is accurate
Colour - Automated tests verify that text and background colour combinations meet contrast requirements, though false positives may appear (see the contrast-ratio sketch below)
*This is an average as it depends on what sort of elements exist on the page.
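To illustrate what a colour-contrast check actually computes, here is a small sketch of the WCAG 2.x contrast-ratio formula; WCAG AA requires at least 4.5:1 for normal text:

```ts
// Relative luminance of an sRGB colour, per the WCAG 2.x definition.
function relativeLuminance(r: number, g: number, b: number): number {
  const channel = (c: number): number => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
}

// Contrast ratio between two colours, ranging from 1:1 up to 21:1.
function contrastRatio(fg: [number, number, number], bg: [number, number, number]): number {
  const l1 = relativeLuminance(...fg);
  const l2 = relativeLuminance(...bg);
  const [lighter, darker] = l1 >= l2 ? [l1, l2] : [l2, l1];
  return (lighter + 0.05) / (darker + 0.05);
}

// Black on white yields the maximum ratio of 21:1.
console.log(contrastRatio([0, 0, 0], [255, 255, 255]).toFixed(2)); // "21.00"
```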
Manual testing reviews the user experience of the product the way someone with accessibility needs might experience it.
The test is a combination of keyboard-only interactions, assistive technologies, and web browser or OS settings.
Manual testing accounts for the other 80%* of errors that scanners cannot detect, including:
Presence of informative page titles - Make sure the page title is unique, relevant, and concise. Page titles are what is visible in tabs or bookmarks and should be appropriate to the page content and/or task
“Skip navigation” option - Manual tests ensure that the option to skip repeated navigational elements is present and works correctly (both checks are sketched below)
*This is an average as it depends on what sort of elements exist on the page.
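The two checks above can be partially automated. This sketch assumes a Playwright setup; the title-length threshold and the skip-link wording are assumptions about the page under test, and whether the title is *accurate* still needs human judgement:

```ts
import { test, expect } from '@playwright/test';

test('manual-check helpers: page title and skip link', async ({ page }) => {
  await page.goto('https://example.com'); // placeholder URL

  // Page title: present, non-empty, and reasonably concise.
  const title = await page.title();
  expect(title.trim().length).toBeGreaterThan(0);
  expect(title.length).toBeLessThan(70); // assumed team convention

  // Skip link: the first Tab stop should be a "skip" link pointing at an
  // in-page anchor for the main content. The wording is an assumption.
  await page.keyboard.press('Tab');
  const firstStop = await page.evaluate(() => {
    const el = document.activeElement as HTMLAnchorElement | null;
    return el ? { text: el.textContent ?? '', href: el.getAttribute('href') ?? '' } : null;
  });
  expect(firstStop?.text.toLowerCase()).toContain('skip');
  expect(firstStop?.href).toContain('#');
});
```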
Screen readers are text-to-speech engines that help people who are blind or visually impaired read digital content. Use a screen reader to confirm that labels and descriptive text are announced properly.
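A quick programmatic pass can complement screen reader testing by flagging form controls that would have no label to announce. This sketch checks the common labelling mechanisms (a wrapping or `for`-linked `<label>`, `aria-label`, and `aria-labelledby`) in plain DOM code; it is a coarse filter, not a substitute for listening to the actual announcements:

```ts
// Flags form controls that a screen reader would have no label to announce.
// Run in a browser console or a DOM test environment.
function controlsMissingLabels(root: Document): HTMLElement[] {
  const controls = root.querySelectorAll<HTMLElement>(
    'input:not([type="hidden"]), select, textarea'
  );
  return Array.from(controls).filter((el) => {
    if (el.getAttribute('aria-label')?.trim()) return false;      // aria-label
    if (el.getAttribute('aria-labelledby')) return false;         // aria-labelledby
    if (el.closest('label')) return false;                        // wrapping <label>
    if (el.id && root.querySelector(`label[for="${el.id}"]`)) return false; // <label for>
    return true;
  });
}

console.log(controlsMissingLabels(document));
```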
When a bug is detected, testers must assign it a priority. Note that all accessibility defects must be resolved; use the priority list below to organize your backlog. Bugs captured through automated testing must be fixed before launch.
Bug priority levels:
Critical (the task cannot happen)
High (the task is difficult to complete)
Medium (completing the task is inconvenient)
Low (the experience is frustrating, but it does not affect the outcome)
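If you triage scanner output into these levels, one option is to map axe-core's impact values onto them. The mapping below is a team-convention assumption, not something axe-core prescribes:

```ts
// Assumption: a team convention mapping axe-core impact values onto the
// bug priority levels above.
type Priority = 'critical' | 'high' | 'medium' | 'low';

const impactToPriority: Record<string, Priority> = {
  critical: 'critical', // the task cannot happen
  serious: 'high',      // the task is difficult to complete
  moderate: 'medium',   // completing the task is inconvenient
  minor: 'low',         // frustrating, but the outcome is unaffected
};

console.log(impactToPriority['serious']); // "high"
```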