CSUN: Unified Accessibility Evaluation Methodology

A presentation by Jonathan Avila and Tim Springer of SSB BART Group about their accessibility auditing (evaluation) methodology.

They start off with some impressive figures about their organisation, and talk about being very objective in the evaluation process. (AC: We tend to approach accessibility more from a human / usability point of view, i.e. whether real people would be able to complete realistic tasks, so this will be an interesting alternative view!)

NB: They define accessibility in terms of barriers to access, rather than whether the interface is usable by someone with a disability.

The W3C is proposing a particular methodology, which SSB have found is very similar to how they work.

Jonathan Avila and Tim Springer from SSB BART Group

Goals

They consider the purpose to be informing stakeholders about the level of conformance to a standard, often for benchmarking, remediating issues, or documenting conformance.

Scope

You need to start off by determining the scope: methodology, platforms, standards, site vs. app, etc.

Define the site to be tested: is it desktop, mobile, public-facing, etc.?

Define the standards to check against: WCAG (and level), Section 508, CVAA, EN 301 549 (AC: known as Mandate 376), or agency- or organisation-specific standards.

What user-agents are used? WCAG talks about "accessibility supported", so you have to define which assistive technologies might be used. That can be tricky for a public-facing website. You really do need to define that baseline.

What methodology / test process are you using? Generally include steps and failures; it can be easiest to start with the WCAG failure techniques, then check against the success criteria.

You can streamline the process by, for example, noting repeated elements or frameworks and testing those once, rather than testing the header on every page.

The sample needs to be representative; the factors are often the size of the site, portal vs. microsite, complexity, whether it is a web application, dynamic vs. static content, and the consistency of the site.

For your sample, choose: common pages with the header/footer; contact/support pages; pages for the core tasks; pages with different technologies (e.g. Flash); pages that complete processes; and high-traffic pages. You almost certainly need assistance from the site owner or development team.

Section 508 has an often-overlooked requirement for documentation, so include online help, PDFs, user guides, etc.

The W3C evaluation methodology proposes adding a random sample, which is a good idea: once you have tested your representative sample, check whether the random sample brings up any new issues.
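
As a rough sketch (not from the talk) of how you might check whether the random sample adds anything, assuming you have a page inventory and some way of recording the issue types found per page (both hypothetical here):

```python
import random

# Hypothetical inventory of all page URLs on the site.
all_pages = [f"https://example.com/page-{n}" for n in range(1, 201)]

# Pages already chosen as the representative (structured) sample.
structured_sample = all_pages[:10]

# Draw a random sample from the remaining pages.
random_sample = random.sample(
    [p for p in all_pages if p not in structured_sample], 10
)

def issue_types_found(page):
    """Placeholder for however the audit records issue types per page."""
    return set()  # e.g. {"1.1.1 missing alt", "2.4.4 ambiguous link text"}

structured_issues = set().union(*(issue_types_found(p) for p in structured_sample))
random_issues = set().union(*(issue_types_found(p) for p in random_sample))

# If the random sample surfaces issue types the structured sample missed,
# the structured sample probably wasn't representative enough.
new_issues = random_issues - structured_issues
print("New issue types from the random sample:", new_issues or "none")
```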

Sometimes you might need to make sure there is good data in the system. For example, a bank might not have a version of the site they can show you with real data in it. You might need to include dummy (but good) data.

Keep a record of what you are testing; for example, the Firefox toolbar for AMP captures the page name/title, path, URL, screenshot and DOM. You need this for the record.
NB: AMP is their testing platform.
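
A minimal sketch of capturing that kind of record yourself with a headless browser (my illustration, assuming Selenium with a local Firefox/geckodriver; this is not how AMP works internally):

```python
# Capture an audit record per page: title, URL, screenshot and rendered DOM.
from selenium import webdriver

options = webdriver.FirefoxOptions()
options.add_argument("-headless")
driver = webdriver.Firefox(options=options)

def capture(url, name):
    driver.get(url)
    driver.save_screenshot(f"{name}.png")        # screenshot for the record
    with open(f"{name}.html", "w", encoding="utf-8") as f:
        f.write(driver.page_source)              # rendered DOM, not raw source
    return {"name": name, "title": driver.title, "url": driver.current_url}

record = capture("https://example.com/contact", "contact-page")
print(record)
driver.quit()
```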

Note the common/needed use cases, which should be mostly covered in your sample, and run through them with common assistive technologies (e.g. JAWS 15/Windows, VoiceOver/iOS, NVDA/Firefox). The client may have specific ones, but they should cover a good range of the site’s functionality.
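
As a small illustration (mine, not theirs), keeping the agreed baseline and use cases as simple data makes it easy to run every use case against every combination. The combinations below are the ones mentioned above; the use cases are hypothetical:

```python
# The assistive technology baseline, taken from the combinations above.
baseline = [
    {"at": "JAWS 15",   "platform": "Windows"},
    {"at": "VoiceOver", "platform": "iOS"},
    {"at": "NVDA",      "platform": "Firefox"},
]

# Hypothetical use cases agreed with the client.
use_cases = ["Find contact details", "Complete the checkout process"]

for case in use_cases:
    for combo in baseline:
        print(f"{case}: test with {combo['at']} / {combo['platform']}")
```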

They have often found that the site changed shortly after, or even during, the testing. It’s best to confirm timings with the client first.

They have also found that it can help to provide your interpretation of each guideline as a central resource for the organisation. Partly to be explicit about what you are testing, but there are also good legal reasons for doing this. (AC: Not explained directly.)

Then audit each page/module, keeping note of your results as you go along.

Test for the known failures first (they effectively carry more weight in WCAG): they are one-way, so if a failure applies you can be fairly sure the page has failed, whereas meeting the success criteria only suggests it passes. Be sure to test all steps in a process, and check for alternatives if it isn’t directly accessible.

Tools

For automatic testing they use a headless browser to capture the test pages into their platform (AMP), which checks the DOM rather than the source code. However, be aware that it will only find ~25% of the issues.

They use ‘guided automatic’ testing, where the test engine (AMP) finds likely candidates for accessibility violations, but these need to be confirmed by the tester.
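
As a rough sketch of the ‘guided automatic’ idea (my illustration, not AMP’s engine): an automated pass over the rendered DOM flags likely candidates, which a tester then confirms or dismisses. Assumes BeautifulSoup; the two rules are simplified examples:

```python
from bs4 import BeautifulSoup

def find_candidates(dom_html):
    """Flag likely violations in the DOM for a human tester to confirm."""
    soup = BeautifulSoup(dom_html, "html.parser")
    candidates = []

    # Images without an alt attribute are likely WCAG 1.1.1 failures (F65),
    # but a tester still has to judge decorative vs. informative images.
    for img in soup.find_all("img"):
        if not img.has_attr("alt"):
            candidates.append(("img missing alt", str(img)[:80]))

    # Form fields without an associated <label> are candidates for review.
    labelled = {label.get("for") for label in soup.find_all("label")}
    labelled.discard(None)
    for field in soup.find_all(["input", "select", "textarea"]):
        if field.get("type") in ("hidden", "submit", "button"):
            continue
        if field.get("id") not in labelled and not field.get("aria-label"):
            candidates.append(("form field possibly unlabelled", str(field)[:80]))

    return candidates  # every item still needs manual confirmation

with open("contact-page.html", encoding="utf-8") as f:
    for rule, snippet in find_candidates(f.read()):
        print(f"[needs review] {rule}: {snippet}")
```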

Manual testing requires a live person and tends to be the most expensive. They use the tools to focus the manual testers’ time.

Some other tools they use for mobile devices are Adobe Edge Inspect (for DOM and screenshots) and Safari developer tools. However, the mobile tools are limited.

Other tools to help are:

  • Contrast checker (see the contrast ratio sketch after this list)
  • AccChecker, aDesigner
  • Java Ferret
  • PDDomview (PDF)
  • Accessibility Inspector in Xcode (iOS)
  • Lint via Eclipse (Android)
  • Favelets (web)
  • Keyboard
  • High contrast mode
  • Zoom
  • Assistive technologies; however, be careful, as the results may be skewed from a technical point of view.
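
The contrast check in that list boils down to WCAG 2.0’s relative luminance and contrast ratio formulas; a small sketch:

```python
# WCAG 2.0 contrast ratio between two sRGB colours given as hex strings.
def _linearise(channel):
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(hex_colour):
    r, g, b = (int(hex_colour.lstrip("#")[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * _linearise(r) + 0.7152 * _linearise(g) + 0.0722 * _linearise(b)

def contrast_ratio(foreground, background):
    lighter, darker = sorted(
        (relative_luminance(foreground), relative_luminance(background)),
        reverse=True,
    )
    return (lighter + 0.05) / (darker + 0.05)

# Grey text on white: needs at least 4.5:1 for normal text at WCAG 2.0 AA.
print(round(contrast_ratio("#767676", "#FFFFFF"), 2))  # ~4.54
```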

Process

  • Go through each checkpoint/best practice using the toolbar and see whether it is met.
  • Provide a description of the “violation”, including user impact.
  • Describe how to fix the issue, if possible / appropriate.
  • Combine ‘pattern violations’ across pages.

After that, refine the results and report back: remove false positives, cross-validate the results across the team, generate a score, and prioritise the “violations” for remediation.
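
A hedged sketch of what combining ‘pattern violations’ and prioritising them might look like as data (the record fields and severity scale are my own, not SSB’s reporting format):

```python
from collections import defaultdict

# Individual findings recorded during the audit (illustrative values).
violations = [
    {"page": "/home",    "criterion": "1.1.1", "issue": "logo image missing alt", "severity": 3},
    {"page": "/contact", "criterion": "1.1.1", "issue": "logo image missing alt", "severity": 3},
    {"page": "/contact", "criterion": "1.3.1", "issue": "form field unlabelled",  "severity": 2},
]

# Combine repeated ("pattern") violations across pages.
patterns = defaultdict(lambda: {"pages": set(), "severity": 0})
for v in violations:
    key = (v["criterion"], v["issue"])
    patterns[key]["pages"].add(v["page"])
    patterns[key]["severity"] = max(patterns[key]["severity"], v["severity"])

# Prioritise by severity, then by how many pages the pattern affects.
ordered = sorted(
    patterns.items(),
    key=lambda kv: (-kv[1]["severity"], -len(kv[1]["pages"])),
)
for (criterion, issue), data in ordered:
    print(f"{criterion}: {issue} (affects {len(data['pages'])} page(s))")
```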

See also

The W3C’s evaluation methodology version 1. They had a few others but I didn’t quite have time to capture them.
