💡 Pitch
Pitch: Make our autocomplete elements screenreader compatible
Michael Berger
One of the most significant impediments to using BC3 with a screen reader is our <bc-autocomplete>, <bc-content-filter>, and <bc-suggestion-select> elements.
These are some of the most fundamental elements for interacting with Basecamp, all of which are broken for screen readers and/or keyboard navigation:
- The to-do assignment picker
- Picking a person/people for sending a ping
- The home page project filter (replaced 01/2022)
- Picking a name for running a report
- The jump menu (fixed 10/2018)
- The @mention picker
Their current state makes it nearly impossible for someone with a screen reader to use Basecamp effectively. I think the best way to understand just how busted these things are is with a demo, so here goes!
<bc-autocomplete>
Here's a quick demo of the to-do assignment picker using Firefox and NVDA:
This applies as well to the <bc-autocomplete> we use for choosing someone to ping:
And when no matches exist we should communicate that both visually and aurally:
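One common way to handle the aural side is a polite live region: a visually-hidden status element whose text is kept in sync with the result count as the user types. Here's a minimal sketch of such a helper – the function name and message wording are illustrative, not our element's actual API:

```javascript
// Sketch: build the message a live region would announce as the user filters.
// Wiring it up might look like:
//   statusEl.textContent = resultsAnnouncement(matches.length)
// where statusEl has role="status" and aria-live="polite", so screen readers
// announce updates without stealing focus from the input.
function resultsAnnouncement(count) {
  if (count === 0) return "No matches found";
  return count === 1 ? "1 result available" : `${count} results available`;
}
```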
<bc-content-filter>
Another important element we use is <bc-content-filter>. Here's a demo from the home page project filter. Notice how when I tab around, the highlighted selection isn't spoken back. This is called a keyboard trap – an accessibility failure where the user is "trapped" in a section of the page from which there is no escape 🌀:
On the report picker, the screen reader repeats the contents of the input instead of the focused result:
Here's how that looks in Safari with VoiceOver. Spoiler: 😟 It's worse, especially with keyboard navigation. The expected arrow up/down navigation takes you out of the picker, and tabbing doesn't announce the name of the selection.
With the quick-jump menu we got partway there! 🙌 In Firefox with NVDA, the project and team results are spoken back. But in Safari, issues remain with keeping focus within the modal (which is a separate issue), and navigation is unclear and inconsistent (tabbing vs. arrow keys).
<bc-suggestion-select>
This is the @mention picker. This one behaves much like <bc-content-filter>, in that it's impossible to navigate or understand the selection that you're making. Just close your eyes and imagine this is what you were hearing 🙈
And again when no matches exist, this should be communicated:
Here's a rundown of the main issues with these interactions and how they should work:
- It's not clear how to interact with the input (e.g. that it's a filter list, that you can @mention people, that you can assign to-dos to more than one person, etc.)
- The number of results in the list should be spoken back as you filter.
- The highlighted selection should be spoken as you move through the results.
- Upon making a selection, it should be spoken back.
- When no matches exist, this should be communicated both visually and aurally.
- Navigation should be consistent and standards-based. We currently use some mix of arrow and tab navigation. Instead, Tab should always be an escape from the input, returning you to the flow of the page, and the arrow keys should be used to navigate results.
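As a sketch of that last point, the arrow-key handling can be a small pure helper that computes the next highlighted index and deliberately leaves Tab alone, so Tab falls through to the browser's normal focus order instead of trapping the user. Names here are illustrative, not our elements' actual API:

```javascript
// Sketch: given a key press, the currently highlighted index, and the number
// of results, return the index to highlight next. Tab is intentionally not
// handled, so it escapes the widget via the browser's normal focus order.
function nextActiveIndex(key, current, count) {
  if (count === 0) return -1; // nothing to highlight
  switch (key) {
    case "ArrowDown": return (current + 1) % count;         // wrap to top
    case "ArrowUp":   return (current - 1 + count) % count; // wrap to bottom
    case "Home":      return 0;
    case "End":       return count - 1;
    default:          return current; // Tab, Escape, etc. are left to the browser
  }
}
```

The element would then mirror the highlighted option's id into `aria-activedescendant` on the input, which is what prompts the screen reader to speak the selection as you move through results.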
I recently came upon a pretty good example for an accessible autocomplete that includes a handy demo I think we could use as a basis for this work.
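For reference, the broad shape of that pattern (the ARIA combobox) looks roughly like this – the ids, labels, and names are illustrative, not what our elements currently render:

```html
<!-- Sketch of the ARIA combobox pattern; ids and values are illustrative. -->
<label for="assignee-input">Assigned to</label>
<input id="assignee-input"
       role="combobox"
       aria-expanded="true"
       aria-controls="assignee-listbox"
       aria-activedescendant="assignee-option-2"
       aria-autocomplete="list">
<ul id="assignee-listbox" role="listbox">
  <li id="assignee-option-1" role="option">Annie B.</li>
  <li id="assignee-option-2" role="option" aria-selected="true">Annie R.</li>
</ul>
<!-- Filter-count announcements go in a polite live region: -->
<div role="status" aria-live="polite" class="visually-hidden">2 results available</div>
```

The key trick is that DOM focus stays on the input the whole time; `aria-activedescendant` tells assistive tech which option is highlighted, and the live region announces the result count as it changes.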
It's worth mentioning that shipping even some of these recommendations would be a step up from what we have now. If we can't get the number of results spoken consistently across browsers, then at minimum communicating how to interact with the element, announcing the selection you've made, and making it possible to select with the keyboard at all would be a considerable win.
I hope we can work this in as a small batch sometime soon. Thanks for reading!