Towards accessibility

Friday, the end of the working day. Bad news always arrives on Friday at the end of the working day.

You are about to leave the office when an email about yet another reorganization lands in your inbox.

Thank you xxxx, yyy, from today you will report to zzzz
...
And Hugh's team will ensure our products are accessible to people with disabilities.

Oh no! What did I do to deserve this? Do they want me to quit? Signing up for thankless hard work and fixing other people's mistakes. This is surely doomed to fail...

That was accessibility a few years ago. Some poor souls were handed the job of "cleaning up" the UI to try to make it accessible to people with disabilities.

What this actually meant was pretty vague - presumably, if there was a visible focus indicator, you could tab through the fields, and there was some alt text and a few field labels, your application could be considered accessible...

But suddenly the "bugs" began to multiply like an avalanche.

Different screen readers and browsers behaved completely differently.

Users complained that the app was unusable.

As soon as a bug was fixed in one place, another one appeared somewhere else.

And even simple changes and fixes to the UI required Herculean effort.

I was there. I survived, but we didn't "succeed": technically we cleaned up a lot, added plenty of field labels and roles, and reached some level of compliance, but nobody was happy. Users still complained that they could not navigate the application. Managers still complained about the constant stream of bugs. Engineers complained that the problem was ill-posed, with no clearly defined "correct" solution that would work in every case.

There were some decidedly eye-opening moments along my journey to understanding accessibility.
Perhaps the first was the realization that bolting accessibility onto a finished product is hard. And it's even harder to convince managers just how hard it is! No, it's not a matter of "adding a few tags" and the UI will work just fine. No, it cannot be done in three weeks; even three months will not be enough.
My next moment of truth came when I saw firsthand how blind users actually use our app. It is SO different from reading bug reports.

I'll come back to this again and again, but almost all of our "assumptions" about how people used our app were wrong.

Navigating a complex user interface with just Tab/Shift+Tab sucks! We need something better: keyboard shortcuts, headings.
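To make that concrete, here is a minimal sketch, assuming React/TSX and a hypothetical page (none of these names are from the original article), of the structure that gives screen reader users something better than Tab alone: landmark regions and headings they can jump between directly.

```tsx
import * as React from "react";

// Hypothetical page layout: landmarks (<nav>, <main>) and headings give
// screen reader users jump targets (e.g. the "H" key in NVDA/JAWS moves
// between headings), instead of forcing them through every control with Tab.
export function OrdersPage() {
  return (
    <>
      <nav aria-label="Main">{/* global navigation links */}</nav>
      <main>
        <h1>Orders</h1>
        <section aria-labelledby="open-orders">
          <h2 id="open-orders">Open orders</h2>
          {/* ...table of open orders... */}
        </section>
        <section aria-labelledby="archived-orders">
          <h2 id="archived-orders">Archive</h2>
          {/* ...archived orders... */}
        </section>
      </main>
    </>
  );
}
```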

Losing focus when the UI changes isn't a big deal, is it? Think again - it is incredibly confusing.
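A minimal sketch of the kind of fix this calls for, assuming a React code base (the ConfirmDialog component below is purely illustrative): when a piece of UI that holds focus goes away, move focus somewhere deliberate instead of letting it silently drop to the document body.

```tsx
import * as React from "react";

// Illustrative dialog: when it closes, focus returns to whatever element
// opened it, so keyboard and screen reader users are not dropped onto <body>.
export function ConfirmDialog(props: {
  open: boolean;
  onClose: () => void;
  children: React.ReactNode;
}) {
  const previouslyFocused = React.useRef<HTMLElement | null>(null);

  React.useEffect(() => {
    if (props.open) {
      // Remember what had focus before the dialog appeared.
      previouslyFocused.current = document.activeElement as HTMLElement | null;
    } else {
      // Restore focus when the dialog goes away.
      previouslyFocused.current?.focus();
    }
  }, [props.open]);

  if (!props.open) return null;
  return (
    <div role="dialog" aria-modal="true" aria-label="Confirm">
      {props.children}
      <button onClick={props.onClose}>Close</button>
    </div>
  );
}
```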

I moved on and worked on other projects for a while, and then we started a new project with a complex user interface and a clear mandate to finally get accessibility right this time.

So we took a step back and looked at how we could approach this differently, succeed, and make the process less of a grind!

Quite quickly we came to some conclusions:

  1. We didn't want the people building the UI to have to fiddle with ARIA labels and roles, let alone the HTML structure of the components. We needed to give them the right components, with accessibility built in out of the box.
  2. Accessibility == usability - i.e. this is not just a technical problem. We needed to change the entire design process and make sure accessibility was considered and discussed before UI design began. You have to think early about how users will discover a given piece of functionality, how they will navigate to it, and how things like right-click will work from the keyboard. Accessibility has to be an integral part of the design process - for some users it is far more than just how the application looks.
  3. From the very beginning we wanted feedback from blind and other disabled users on how easy the application was to use.
  4. We needed really good ways to catch accessibility regressions.

Well, from an engineering point of view, the first part sounded quite fun - designing an architecture and implementing a component library. And indeed it was.

Taking a step back, looking at ARIA examples, and thinking of this as a design problem rather than a "retrofit" problem, we introduced some abstractions. A component has a 'Structure' (the HTML elements it consists of) and a 'Behaviour' (how it interacts with the user). For example, in the snippets below we have a simple unordered list. Adding a "behavior" attaches the corresponding roles to the list so that it acts like a list; we do the same for a menu.

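A rough TypeScript sketch of the idea - the names Behavior, listBehavior, menuBehavior and applyBehavior are illustrative, not the actual stardust-ui API: the structure stays a plain unordered list, and the chosen behavior decides which roles, tabindex values and keyboard handlers get attached to it.

```ts
// A "Behavior" describes how a structure should present itself to assistive
// technology: which ARIA roles go where, and which keys it has to handle.
interface Behavior {
  rootRole: string;
  itemRole: string;
  onItemKeyDown?: (event: KeyboardEvent, items: HTMLElement[], index: number) => void;
}

// The same <ul>/<li> structure can act as a plain list...
const listBehavior: Behavior = {
  rootRole: "list",
  itemRole: "listitem",
};

// ...or as a menu, where Up/Down arrows move focus between the items.
const menuBehavior: Behavior = {
  rootRole: "menu",
  itemRole: "menuitem",
  onItemKeyDown: (event, items, index) => {
    if (event.key === "ArrowDown" || event.key === "ArrowUp") event.preventDefault();
    if (event.key === "ArrowDown") items[(index + 1) % items.length].focus();
    if (event.key === "ArrowUp") items[(index - 1 + items.length) % items.length].focus();
  },
};

// Applying a behavior decorates the existing structure without changing it.
function applyBehavior(root: HTMLElement, behavior: Behavior): void {
  root.setAttribute("role", behavior.rootRole);
  const items = Array.from(root.querySelectorAll<HTMLElement>("li"));
  items.forEach((item, index) => {
    item.setAttribute("role", behavior.itemRole);
    if (behavior.onItemKeyDown) {
      // Roving tabindex: only the first item sits in the Tab order; arrows do the rest.
      item.tabIndex = index === 0 ? 0 : -1;
      item.addEventListener("keydown", e => behavior.onItemKeyDown!(e, items, index));
    }
  });
}

// Usage: the markup stays a simple unordered list either way.
applyBehavior(document.querySelector<HTMLElement>("#actions")!, menuBehavior);
```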

In fact, not only are roles added here, but also event handlers for keyboard navigation.

This looks much neater. With a clean separation between the two, it wouldn't matter how the structure was created: we could apply Behaviors to it and get the accessibility right.

You can see this in action at https://stardust-ui.github.io/react/ - a React UI library designed and implemented with accessibility in mind from the start.

The second part - changing the approach and processes around design - scared me at first: lowly engineers trying to push through organizational change doesn't always end well. But it turned out to be one of the areas where we made the most interesting and significant contributions. In a nutshell, our old process looked like this: new functionality was designed by one team, our leadership then reviewed and iterated on the proposal, and once it was approved the design was handed over to the engineering team. In this setup the engineering team effectively "owned" accessibility, because it was their job to fix any issues related to it.

At first it was hard work to explain that accessibility and usability are inextricably linked and that they have to be addressed at the design stage - which meant big changes and a redefinition of some roles. However, with the support of management and a few key people, we put the idea into practice, so that designs were reviewed for accessibility and usability before they were presented to leadership.

And this feedback was extremely valuable to everyone: it was a fantastic exercise in sharing knowledge about how users interact with web applications, we identified numerous UI problem areas before they were built, and the development teams now get much better specifications covering not only the visual but also the behavioral aspects of a design. The reviews themselves are fun, energetic, passionate discussions about technical details and interactions.

We could do even better with blind and disabled users present at these (or follow-up) meetings - that was difficult to organize, but we now work with local organizations for the blind and with companies that provide external testing, so that user flows are verified early in development, both at the component level and at the level of whole flows.

Engineers now have fairly detailed specifications, ready-made components to implement them with, and a way to validate user flows. Experience also taught us what we had been missing all along: a way to stop regressions. Just as people use integration or end-to-end tests to verify functionality, we needed tests that detect changes in interactions and flows - both visual and behavioral.

Detecting visual regressions is a fairly well-defined task; there is little to add to the usual process beyond, perhaps, checking that the focus indicator stays visible while navigating with the keyboard. More interesting are two relatively new technologies for working with accessibility:

  1. Accessibility Insights is a set of tools that can be run both in the browser and as part of the build/test cycle to identify problems.
  2. Verifying that screen readers work correctly has been a particularly hard problem. With access to the accessibility DOM (the accessibility tree), we can finally take accessibility snapshots of the app, much like we do for visual tests, and check them for regressions - a rough sketch of both approaches follows this list.
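The sketch below shows what both of these can look like in a test run, assuming a Playwright setup and the axe-core engine (the same engine Accessibility Insights is built on) via @axe-core/playwright; the URL and snapshot names are placeholders, and the keyboard focus-visibility check mentioned earlier is included for completeness.

```ts
import { test, expect } from "@playwright/test";
import AxeBuilder from "@axe-core/playwright";

test("accessibility checks and snapshots", async ({ page }) => {
  await page.goto("https://example.com/app"); // placeholder URL

  // 1. Rule-based checks (missing labels, wrong roles, contrast, ...) -
  //    the same kind of scan Accessibility Insights runs in the browser.
  const results = await new AxeBuilder({ page }).analyze();
  expect(results.violations).toEqual([]);

  // 2. Snapshot of the accessibility tree, compared against a stored baseline,
  //    so a lost role or renamed control fails the build like a visual diff.
  const tree = await page.accessibility.snapshot();
  expect(JSON.stringify(tree, null, 2)).toMatchSnapshot("accessibility-tree.json");

  // Bonus: tab through the first few controls and screenshot each stop,
  // so a missing focus ring shows up as a visual regression.
  for (let stop = 0; stop < 3; stop++) {
    await page.keyboard.press("Tab");
    expect(await page.screenshot()).toMatchSnapshot(`focus-stop-${stop}.png`);
  }
});
```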

So, in the second part of the story, we moved from editing HTML code to working at a higher level of abstraction, changed the design development process and introduced thorough testing. New processes, new technologies, and new levels of abstraction have completely changed the landscape of accessibility and what it means to work in this space.
But this is only the beginning.

The next realization is that blind users are at the cutting edge of technology - they benefit the most not only from the changes described above but also from the new approaches and ideas made possible by ML/AI. For example, Immersive Reader presents text in a simpler, clearer form: it can read text aloud, break sentences down grammatically, and even illustrate word meanings with pictures. This doesn't fit into the old "make it accessible" mentality at all - it is a usability feature that helps everyone.

ML/AI is enabling entirely new ways of interacting and working, and we are excited to be part of the next stages of this journey. Innovation is driven by a change in thinking: humanity has existed for millennia, machines for centuries, websites for a few decades, and smartphones for even less - technology must adapt to people, not the other way around.

P.S. The article has been translated with minor deviations from the original. As a co-author of the article, I agreed these deviations with Hugh.
