
Recommendations

The following recommendations outline actions that AI developers, deployers, and policymakers should consider implementing to address the opportunities and concerns raised in this research.

Ensure that all platforms that integrate AI are fully accessible to and usable by people with disabilities.

  • Design and develop web pages and applications that conform to the latest Web Content Accessibility Guidelines or similar accessibility standards. Ensure that updates are accessible at the time of launch.
  • Design AI-based systems with input from diverse people with disabilities both to ensure access directly to these technologies and to fully develop use-cases that may present outsized opportunities for people with disabilities.
  • Allow users to customize AI-enabled voice conversations for response time, speaking speed, and other factors like stuttering and the need to correct an input.
  • Integrate customization and fine-tuning for users through machine learning and user settings to improve the user experience while preserving control over data privacy and security. Clearly communicate to users the trade-offs between greater customization and data privacy.
  • Ensure that employment- and education-based AI and automation, including for hiring or testing, are fully accessible to people with disabilities and do not interfere with the use of assistive technology.
  • Build autonomous vehicles to be accessible to people with disabilities by ensuring that the human-machine interface, including communications, navigation, and physical buttons or kiosks, is fully accessible. In addition, users of mobility devices must be able to access the vehicle, and wheelchair users must be able to secure their chair and use the restraint system.
  • Fully test innovative AI-based software (e.g. health apps) and hardware systems (e.g. delivery robots) with diverse users with disabilities to maximize access to information and environments and to minimize creating unintended barriers.
  • When appropriate, use AI for making documents and digital technologies accessible during the design, development, and deployment phases rather than relying on end users to make materials accessible with their own AI tools. However, AI may complement but should not replace human accessibility experts and usability testers.

Improve privacy and data security practices to increase trust in AI products and enable the use of AI with sensitive information.

  • Clearly communicate to users how their data is used by the AI developer, in model training, and by third parties without requiring users to understand complicated terms of service or privacy policies. For example, communicate in the chatbot interface how data is used and any changes to privacy policies.
  • Provide users with transparent, easy-to-find controls that allow users to meaningfully decide how their data inputs may be used.
  • Provide users with control over how their data is used to train or validate AI models.
  • Develop and deploy on-device AI products that allow users to benefit from AI while keeping their data and information on their own device. Allow users to easily switch between cloud and on-device processing or to permanently opt in to on-device data processing.
  • When handling sensitive information in a customer service or high-impact use-case (such as hiring portals), allow users to easily switch to a human agent.
  • Ensure data security in high-impact use cases by not integrating sensitive information or conversations into training data by default.

Improve the accuracy of AI outputs and provide users with clear expectations about the accuracy of these outputs.

  • Set appropriate user expectations of model capability to avoid overreliance on AI in highly sensitive contexts, such as transportation safety and visual interpretation of medical information.
  • Provide users with easy access to data sources and improve the explainability of model outputs to increase user trust and to facilitate an appropriate assessment of the degree to which users can rely on a given output.
  • Create tools that allow developers and deployers of AI to assess fairness for people with a variety of characteristics, including disabilities, and provide human users with a better understanding of why a model made a certain decision, for example in sorting job applications.
  • Actively train speech recognition models to detect, understand, and respond to a greater diversity of voices, including voice differences related to disability, regional dialects, and accents.
  • Provide users with ways to confirm AI model outputs, especially in contexts where it may be difficult to know whether an output is accurate, such as for blind users accessing AI-generated visual descriptions or using AI to improve document accessibility.

Ensure that AI used in high-impact areas is adequately trained, validated, and monitored to avoid inappropriate decision-making and outputs affecting people with disabilities and other groups.

  • Clearly disclose the use of AI in any high-impact setting, such as hiring portals, health insurance determinations, educational testing, and other contexts that affect users’ real-life opportunities. Identify to users whether decisions are made by an algorithm or by a human evaluator.
  • Integrate better disability awareness into therapy and health chatbots. AI therapy tools must be trained specifically to support the needs of people with disabilities and to minimize ableist language or language that may encourage self-harm.
  • Set limitations on chatbot outputs, and communicate the limits of the AI model, including by conveying uncertainty when appropriate.
  • Provide referrals to and information about human providers or organizations when the chatbot is unable to provide accurate information, especially in a medical context or when a user is seeking support for a disability.
  • Provide transparent safety data, and clearly communicate incident responses and needed improvements when AI is used in risky contexts, such as autonomous vehicles.

Create more robust opportunities for users to develop skills using and deploying AI and to understand the limitations of AI.

  • Create more opportunities for individuals to expand their AI literacy skills through both traditional means (e.g. online courses and videos) and innovative methods of course delivery, such as directly in a chatbot interface.
  • Assist users in assessing the accuracy of AI outputs, including during setup of visual interpretation tools.
  • Complement the deployment of user control and privacy settings with information about where to find and how to use these settings.
  • Ensure that employers provide accessible training on using AI effectively on the job.
  • Ensure that unemployed individuals have equal access to affordable, high-quality AI training.
  • Make AI training opportunities, including videos and interactive apps, fully accessible to people with disabilities.
  • Provide more widespread training on how to use AI appropriately to make documents and materials meaningfully and accurately accessible in the workplace, educational settings, and other environments where inaccessibility often puts people with disabilities behind their nondisabled peers.

Maximize AI development to meet the specific access needs of people with disabilities.

  • Prioritize research and development to create tools that specifically benefit people with disabilities or that incorporate their access needs.
  • Invest in data collection and analysis as well as AI model development to improve pedestrian navigation and wayfinding, especially for users who are dependent on the accessibility of the pedestrian environment.
  • Ensure that autonomous vehicle companies work with AI navigation developers to improve and integrate tools that facilitate safe, accessible navigation to the vehicle and from the vehicle to the final destination.
  • Improve the accuracy of accessibility-related AI uses, such as automated captions and visual descriptions.
  • Allow students to use AI for educational purposes and discrete access tasks while supporting students in understanding the difference between using AI tools for learning assistance and for cheating.
  • Allow employees to use AI for discrete access tasks while helping employees understand the privacy, data security, and business implications of using AI in the workplace.

Establish governmental guardrails and policies that promote fairness in high-impact use cases, mandate data privacy and security, and ensure accessibility for people with disabilities.

  • Provide guidance and issue enforceable guardrails that minimize the risk of discrimination in automated job testing, algorithmic hiring tools, algorithmic health benefits decision tools, and other AI models and algorithms that affect access to high-impact areas of life.
  • Invest in AI research that promotes accessibility and use-cases unique to people with disabilities, with the understanding that designing for people with disabilities sometimes leads to a “curb-cut” effect for people without disabilities.
  • Develop and implement standards that help developers produce safer, more inclusive, and fairer algorithms, automation, and AI models.
  • Require developers and deployers of AI to maximize data security, privacy, and transparency about how user data is handled, and require developers and deployers to provide clear user controls and disclosures about AI usage to affected individuals, especially in high-impact decision making.