
Executive Summary

For many Americans, artificial intelligence (AI) tools have become part of daily life. Like their counterparts without disabilities, Americans with disabilities can benefit from the support of AI tools. AI tools may be especially helpful for converting information from one form to another, such as turning pictures or audio into text or text into speech, promoting access and ease of communication for people with disabilities. However, AI is a new technology, and its risks related to bias, accessibility barriers, accuracy limitations, and privacy issues have not yet been fully explored.

In this study, researchers from the American Foundation for the Blind (AFB) surveyed 1,070 U.S. AI users with disabilities, along with 665 AI users without disabilities. The survey participants answered questions about how they learned to use AI and the types of AI they use, what they use it for, how helpful or unhelpful AI is in their lives, and what ideas they have to make the AI better. The participants also shared their experiences using autonomous vehicles (AVs) and beliefs about the privacy of AI versus humans in processing sensitive information. Other questions asked about experiences taking automated job screening assessments and encountering healthcare denials, which are increasingly driven by AI.


Key Findings

  • Almost all participants learned to use AI on their own. Although only 9% of the participants learned through their employer or through an online course, 26% of the participants wished they could learn through their employer, and 37% said they would prefer to learn through an online course. More women than other genders learned AI through a friend or coworker showing them how to use it, whereas more men than other genders learned to use AI through videos.
  • Regardless of disability status, gender, age, and race, most participants used voice-activated AI assistants like Siri or Alexa. Two-thirds of working participants used AI in the workplace, and 53% of student participants used AI to support their learning. A larger share of younger participants used AI for notetaking and writing compared to older participants.
  • Almost half (44%) of disabled participants used AI for visual descriptions compared to just 26% of nondisabled participants. Men were likelier than other genders to use AI for visual descriptions. Additionally, 43% of disabled participants used AI captions, compared to 39% of nondisabled participants.
  • Participants with disabilities, especially blind and low-vision (BLV) participants, tended to use voice-activated AI for a wider range of tasks than participants without disabilities. For example, they used voice-activated AI to play games or music, check the news, and dictate text messages.
  • Working participants used AI for writing, notetaking, and research at equal rates across disability status, age, race, and gender. Disabled participants also used AI to make information more accessible, both at work and at school.
  • About 8% of participants used AI for psychotherapy or mental health support. They appreciated the constant availability, anonymity, and perceived safety of AI psychotherapy, but also noted weaknesses such as overly “canned” responses and responses that could harm people with more complex mental health conditions.
  • More disabled than nondisabled participants reported accessibility barriers when learning to use AI. The most common barriers were inaccessible videos, material that was difficult to understand, and trainings that required use of a mouse.
  • While applying for jobs, 42% of job seekers had to take an automated test or interview. More disabled than nondisabled job seekers reported difficulties taking the automated test or interview, such as having to turn off assistive technology on their computer, needing to process inaccessible information, or being evaluated against neurotypical standards.
  • Disabled participants were almost three times as likely as nondisabled participants to report a healthcare denial in the past two years.
  • Among participants who rode in autonomous vehicles, only 49% of BLV riders said the ride was fully accessible, compared to 75% of sighted riders.
  • Users of AI visual descriptions with disabilities rated the descriptions as more accurate than did users without disabilities, whereas users of AI captions with disabilities rated the captions as less accurate than did users without disabilities.
  • In a tech support scenario where participants would have to share private information like an account number, 73% of participants preferred a human over a chatbot, regardless of race or disability. In a tech support scenario without sensitive information shared, 60% of White and 52% of non-White participants would rather work with a human.
  • Participants tended to believe humans are more private than AI overall, especially White participants. Regardless of race, participants tended to value privacy over the independence and efficiency offered by AI. Specifically, 36% of disabled and 43% of nondisabled participants prioritized privacy protection over the benefits of using AI.
  • Participants expressed cautious optimism for AI’s potential to improve experiences for pedestrians and public transit users.

Summary of Recommendations

  • Ensure that all platforms that integrate AI are fully accessible to and usable by people with disabilities.
  • Improve privacy and data security practices to increase trust in AI products and enable the use of AI with sensitive information.
  • Improve the accuracy of AI outputs and provide users with clear expectations about the accuracy of these outputs.
  • Ensure that AI used in high-impact areas is adequately trained, validated, and monitored to avoid inappropriate decision-making and outputs affecting people with disabilities and other groups.
  • Create more robust opportunities for users to develop skills using and deploying AI and to understand the limitations of AI.
  • Maximize AI development to meet the specific access needs of people with disabilities.
  • Establish governmental guardrails and policies that promote fairness in high-impact use cases, mandate data privacy and security, and ensure accessibility for people with disabilities.