Tech Notes
It’s 2025, and for many, automation and artificial intelligence (AI) are an ordinary part of everyday life. From smart home routines to virtual assistants and chatbots, automation and AI have proven to be versatile and handy in a multitude of ways. That said, automation and AI do not yet address or solve every problem on their own; this is especially true in the realm of digital accessibility. There is a (false) notion that relying solely on automated testing tools or AI-powered accessibility overlays will guarantee that a website is accessible. Much of the reasoning behind this idea is that automation and AI are extremely high tech, so they should be able to solve these issues with ease, right?
Well, not quite.
What’s the Deal With Automated Testing?
Automated accessibility testing tools examine the code and content that make up a website. The results are checked against the World Wide Web Consortium’s (W3C) Web Content Accessibility Guidelines (WCAG), a set of technical standards intended to ensure that experiences on the web are accessible to all users. When issues are found, the tools flag them. Oftentimes, a tool explains the issue and provides generic remediation suggestions, but none of the code or content on the site is altered.
While these automated tools can be extremely helpful in the testing process, they don’t catch every issue, and some prime examples are discussed below.
Keyboard Navigation and Interactivity
Users should be able to navigate to any interactive element on the page, such as links and buttons, via the keyboard. This ensures that users who rely solely on keyboard navigation can fully interact with the content on a page. Automated testing tools can detect when an interactive element has been improperly hidden from assistive technology via the aria-hidden="true" attribute. On the other hand, they can miss instances where an element is improperly removed from the tab order of a page via the tabindex="-1" attribute, as well as when a non-interactive element is unnecessarily included in the tab focus order with the tabindex="0" attribute.
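As a rough illustration, markup like the following can slip past an automated scan even though it breaks keyboard access (the button label and heading text here are made up for the example):

    <!-- A real button pulled out of the tab order with tabindex="-1":
         keyboard users can never reach it, yet many automated tools will not flag it. -->
    <button type="button" tabindex="-1">Open menu</button>

    <!-- A non-interactive heading forced into the tab order with tabindex="0":
         keyboard users land on it for no reason, which also tends to go unflagged. -->
    <h2 tabindex="0">Our products</h2>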
On Focus, On Input
When an interactive element receives keyboard focus, or a field receives input from the user, no change of content or context should occur (unless an accessible warning has been provided ahead of time). For instance, if keyboard focus moving to a button automatically activates a popup, or if a form automatically submits when its last field receives input, both are issues. These issues, as jarring and problematic as they are, also happen to be undiscoverable by automated testing tools.
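A minimal sketch of the second pattern, assuming a simple hypothetical sign-up form with a made-up action URL; the automatic submit on input is exactly the kind of unannounced change of context that automated tools cannot detect:

    <form id="signup" action="/subscribe" method="post">
      <label for="name">Name</label>
      <input id="name" name="name" type="text">

      <label for="zip">ZIP code</label>
      <!-- Submitting as soon as the last field changes: a change of context
           triggered by input, with no warning to the user. -->
      <input id="zip" name="zip" type="text"
             onchange="document.getElementById('signup').submit()">
    </form>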
Element Purpose
All elements have a purpose, whether they be links, headings, input fields, etc. Automated testing tools can detect whether an element is missing a label, or if the given label is redundant, but they can’t determine whether the purpose of a specific element is properly conveyed. In other words, they can’t tell whether the given name or label makes sense out of context. For instance, if properly marked up, the simple heading called “Start” for an introductory paragraph on the history of apples would pass, even though it doesn’t make sense in or out of context. Similarly, if there’s a link that brings the user to a page about Corgis, the unique link name “bagels” would technically pass an automated test, even though it’s misleading and makes no sense given the context.
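Both of those examples would pass an automated check because the markup itself is valid. A sketch of what that might look like (the link URL and surrounding text are hypothetical):

    <!-- Valid heading markup, so automated tools pass it, but "Start" conveys
         nothing about the history-of-apples content that follows. -->
    <h2>Start</h2>
    <p>Apples were first cultivated in Central Asia thousands of years ago.</p>

    <!-- A unique, non-empty link name, so it passes automated checks, even though
         "bagels" is misleading for a page about Corgis. -->
    <a href="/corgis">bagels</a>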
Input Assistance
WCAG Guideline 3.3, Input Assistance, was established to increase the likelihood that users notice any input errors they make, to help them understand how those errors can be corrected, and to reduce the number of irreversible errors made overall. It just so happens that violations of a majority of the Success Criteria within this guideline go undetected by automated testing tools. For example, let’s imagine there is a form that initially asks for your email. When you type your email in, you forget the ‘@’ sign. There should now be a message on the screen, programmatically associated with that email field, identifying the error and suggesting a solution. Instead, this instructional text does not exist, and only a non-descriptive message simply stating “error” is provided. This alone violates two of the Success Criteria for this guideline, 3.3.1: Error Identification and 3.3.3: Error Suggestion, but automated testing tools are not yet programmed to discover issues of this nature, so they will go unreported.
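As a rough sketch (the field names and wording are hypothetical), the difference between the failing pattern and a conforming one can be as small as this:

    <!-- Fails 3.3.1 and 3.3.3: the message just says "error" and is not
         programmatically tied to the email field. -->
    <label for="email">Email</label>
    <input id="email" type="email">
    <p>error</p>

    <!-- A conforming alternative: the message identifies the error, suggests a fix,
         and is associated with the field via aria-describedby. -->
    <label for="email2">Email</label>
    <input id="email2" type="email" aria-invalid="true" aria-describedby="email2-error">
    <p id="email2-error">Please enter a valid email address, such as name@example.com.
      The address you entered is missing the "@" symbol.</p>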
Color Contrast
In many cases, automated testing tools will flag color contrast issues on a page. If the element is text-based, the tool will take into account the size of the text to ensure the contrast ratio it is tested against is accurate. Unfortunately, automated testing tools don’t assess the color contrast of other elements, such as images, SVGs used to represent UI elements, elements on mouse hover, and placeholder text in input fields. For instance, if an image is made up of low-contrast text, this won’t be flagged by an automated testing tool. Similarly, if a link has low color contrast when hovered, this also won’t be detected as a problem.
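For example, styling like the following (the color values are illustrative only) produces placeholder text and a hover state with very low contrast against a white background, and neither is typically evaluated by automated tools:

    <style>
      /* Light gray placeholder on a white field: well below the 4.5:1 ratio,
         but most automated checkers skip placeholder text. */
      input::placeholder { color: #cccccc; }

      /* A hover color that nearly disappears against white: hover states are
         also generally not evaluated automatically. */
      a:hover { color: #d8d8d8; }
    </style>
    <input type="search" placeholder="Search articles">
    <a href="/about">About us</a>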
Use of Color and Other Sensory Characteristics
It’s vital that color is not relied on to convey the meaning of elements (e.g., links are not distinguished only by their color), as this is completely inaccessible to users with limited color vision or color blindness. Similarly, other sensory characteristics that require knowledge of the visual layout of the page (e.g., ‘on the left’ or ‘the square-shaped element’) should not be solely relied on, as these instructions are not accessible to users who are blind or have low vision. These issues also risk slipping under the radar of automated testing tools.
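A small sketch of the link example (the colors and URL are made up): removing the underline and relying on hue alone may look tidy, but most automated tools will not object as long as the text itself has sufficient contrast against the background.

    <style>
      /* Links differ from the surrounding text only by color, with no underline
         or other visual cue, so users with limited color vision may not find them. */
      p { color: #222222; }
      a { color: #0b5d1e; text-decoration: none; }
    </style>
    <p>Read our <a href="/pricing">pricing guide</a> before you sign up.</p>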
Automated Testing in Conjunction With Manual Testing
As demonstrated time and time again in the previous examples, automated testing tools are not reliable when used on their own. With that being said, they are still immensely helpful when paired with manual testing. WAVE, a popular automated testing tool, includes the following notice on its “Help” page:
“WAVE is [a] suite of tools designed to help you make your web content more accessible. WAVE cannot tell you if your web content is accessible. Only a human can determine true accessibility. But, WAVE can help you, as a human, evaluate the accessibility of your web content.”
Axe DevTools, another popular automated testing tool, likewise rejects the notion that digital accessibility testing tools replace manual accessibility testing in its FAQ section. Overall, when paired with manual testing, these browser-based automated testing tools are a favorable pick for ensuring broad coverage of issues.
What About AI?
AI is an undoubtedly advanced technology that is capable of many, many things. Considering it’s much more intelligent than non-AI-driven automated testing tools, it must be able to identify and fix any accessibility issues it comes across without human intervention, right?
Also no.
AI-powered accessibility overlays are just a band-aid solution for a bigger problem. They don’t change the source code of a site; instead, they emulate a semi-accessible, partially usable experience and call it a day. The underlying issues these sites have still exist, and without fixing them directly, these sites will never truly be accessible, or usable, for that matter. This sheds a little more light on why, unlike automated accessibility testing tools, AI-powered overlays are not well received. In fact, WebAIM’s Survey of Web Accessibility Practitioners reported that 67% of overall respondents and 72% of respondents with disabilities rated overlays and similar tools that automate accessibility changes in web pages as “not at all or not very effective.”
As discussed prior, there is an abundance of issues that automated accessibility testing tools currently do not catch on their own; the same goes for these AI-powered overlays. Specific instances of low color contrast and keyboard inaccessibility, just to name a few, will (and do) slip under the radar of AI-powered overlays, and thus go unfixed. These limitations prevent overlays from ensuring a website is WCAG compliant, even if the overlay’s hosting site guarantees it.
It also just so happens that some of these public-facing hosting sites are riddled with accessibility issues of their own!
With all of this information, it may not come as a surprise that the number of overlay-based lawsuits has risen by 60%, and that 30% of all federal and state lawsuits involve these overlays, as reported in UsableNet's 2023 Digital Accessibility Lawsuit Report. All things considered, relying on AI-powered overlays to identify and fix accessibility issues is not sufficient to ensure compliance, let alone an accessible and usable experience for all users, no matter how intelligent these tools may seem.
Final Thoughts
At this point in time, the only way to achieve an accessible and usable experience for all people is through people. When paired with human intervention, automated accessibility testing tools have the power to make an extremely positive impact. AI-powered overlays, on the other hand, feed into the idea that technology can solve everything. This is a dangerously inhibiting idea that we simply cannot succumb to, especially when we know there are issues that even the most advanced technology just doesn’t have the capacity to ‘fix.’
Years down the line, when AI is capable of solving digital accessibility, we can then shift our focus to usability and how we can enhance the user experience. This topic is discussed in further detail in the next installment of this series, “Missing a Human Touch: What Happens When AI “Solves” Digital Accessibility?”
About the Author
Alexis Hubbard is a Digital Accessibility Resident at the American Foundation for the Blind. Having graduated from the University at Buffalo with a B.S. in Computer Science, Alexis enjoys digging into the technical side of digital accessibility. Outside of Alexis’s creative endeavors, you might find them nerding out about an accessible component they created (and likely broke for the sake of demonstrating different fixes), or deep diving into various areas of research as they continue to learn how to make the web a more accessible and enjoyable place for all people.
About AFB Talent Lab
The AFB Talent Lab aims to meet the accessibility needs of the tech industry – and millions of people living with disabilities – through a unique combination of hands-on training, mentorship, and consulting services, created and developed by our own digital inclusion experts. To learn more about our internship and apprenticeship programs or our client services, please visit our website at www.afb.org/talentlab.