
Aspirations for Future AI

Suggested Improvements to VAI

[Image: Three colleagues converse in a meeting room. One stands holding his white cane in one arm. Beside him, a young woman sits holding a fidget toy next to a woman who is using a wheelchair.]

Across responses, frustration with the shortfalls of voice-activated AI tools appeared frequently and was often paired with specific hopes for how voice-activated assistants could improve. About 45.5% of all participants voiced at least one explicit aspiration for how these tools should improve. This aspirational tone was especially common among BLV participants (56.9%) compared with sighted participants (35.4%).

The most consistent aspiration was for more natural, competent communication (28.5% of all responses). Participants described wanting assistants to wait, listen, and respond in ways that fit typical human speech rather than forcing rigid command phrasing. One participant captured that conversational boundary goal clearly: “More options in defining a wake word. Better intuition in knowing when we humans have completed a question or request.” Alongside that, fixing speech recognition problems was a particular priority. One participant framed the ideal simply as improved listening behavior: “[h]elp it learn to distinguish similar-sounding words and help it learn to listen to the entire message before trying to respond.” Nondisabled participants also raised accent-related hopes; one wrote, “Accent training for [AI]. Maybe I'm misheard because of my thick accent because I'm from around Pittsburgh.”

A second theme was reliability and personal control. About 11.4% of all responses called for reliability improvements. These hopes tended to focus on assistants behaving predictably, honoring user intent, and giving the user control over when and how the assistant responds. One participant put the personalization goal in concrete terms: “Is there a way to ensure that it only responds to my voice? […] I would like it to respond only to my voice.” Another emphasized that trust depends on clear repair and feedback when things fail, such as when the assistant mishears or executes the wrong command: “AI needs to give feedback when I want to for an action that did not work.” Accessibility-specific hopes were less frequent but still present, appearing in 2.4% of responses. One participant with a speech disability stated, “Voice activated systems should allow people with speech impairments like my stutter to take more time to speak if necessary. I get stuck on certain sounds, and I find if I pause too long, the system thinks I'm done and doesn't correctly execute the command I want.”

Finally, a smaller but meaningful slice of participants imagined improvements through better fit with real-world situations and routines (5.7% of responses). These hopes were often about assistants anticipating needs and performing well outside ideal environments. A participant described an “assistant that understands context” as the end goal: “I imagine a future where AI is aware of my personal circumstances […and] requests and anticipates the natural next steps of my requests.” Others emphasized performance in noisy settings and broader voice inclusivity, including one participant with physical and neurological disabilities who wrote, “Better background-noise cancellation when driving. Better recognition of women's speech.” Vocabulary and cultural fluency also appeared as a quality-of-life aspiration, with one nondisabled participant noting, “[t]he AI should have a better vocabulary that includes modern slang words.”

AI’s Future in Transportation

Regarding AI’s future in transportation, responses expressed broader hopes about transportation access for individuals and communities alike, safety through the removal of human error, and possible efficiency gains from the objective perspective of an AI decision-maker. Integrating AI into transportation systems and technology, such as AI traffic analysis along public transit routes or AI accessibility features in all manner of vehicles, was a strong trend among all participants. Many responses focused on AI as a way to make transportation more reachable and usable, especially through affordability, availability, and reduced friction (n=211; Disabled 52.1%; BLV 6%). Discussions centered on opportunities for better routing, better service matching, and easier trip planning, with participants offering specific examples that combine ease and access.

Disabled participants, and a vast majority of BLV participants, more often framed AI as a means of improving basic mobility through wayfinding, independence, and safer navigation, rather than as a novelty. Disabled participants emphasized AI’s potential to expand access by making navigation more usable and environments more interpretable through better AI descriptions of surroundings and better navigational directions. Nondisabled participants voiced similar hopes; one suggested: “[...]future AI may help vision impaired citizens by providing them with audio messages at crosswalks.”

Many disabled participants also framed AI in transportation as promoting independence. Responses indicated a hope that AI could reduce reliance on others, enable spontaneous movement, and expand participation in social activities and appointments. As one BLV participant noted, “... all of these advances could mean much greater independence in traveling, which is very exciting.” Lastly, disabled participants shared hopes for AI to improve accessibility in transportation. They indicated that AI-powered technologies, especially AVs, could reduce existing barriers, such as ride denials for guide dog users, and felt that AI could expand transportation availability if cost and accessibility are addressed.

However, participants across all demographics expressed concerns about the unknowns that AI presents, especially in an industry as important to daily life as transportation. Hopeful sentiment about autonomy through AI-enabled transportation was often paralleled by the fear that human instinct and decision-making could not be eliminated entirely, especially when it came to actual driving. As with other AI sectors discussed, participants expressed a desire for guardrails and safety nets to be put in place. As one participant put it:

“I feel that as AI becomes more conversational and incorporates more aspects of a user’s situation, navigation-based cognitive load will decrease and more barriers to access can be circumvented. AI vehicles could offer safer experiences, both to those using them and to other drivers on the road, because they don’t suffer from the same perceptual biases that human drivers do. On the other hand, AI is new and there is no crucible like the real world. I’m not confident that we understand all the pitfalls of deeply incorporating AI into existing transit systems.”

Suggestions for Improving Privacy of AI

Several themes related to improving AI privacy emerged across participant responses. Broadly, participants’ aspirations and fears centered on transparency, control over data, consent, and a desire for stronger guardrails to protect privacy. More disabled than nondisabled participants discussed privacy-related aspirations, suggesting that while privacy matters to all users, people with disabilities may spend more time thinking through its implications.

Participants consistently expressed a desire for transparency regarding what happens to the data they provide to AI systems. Specifically, they wanted clear disclosure about what data is collected, how it is used, and whether it is used to train models. Participants also emphasized the importance of having meaningful control over their data, including the ability to opt out of data use and to delete previously collected information. One participant summarized this theme succinctly: “AI should be transparent about what information is collected and give users an option to delete their information.” Some participants went further, expressing a preference for data collection and use to be turned off by default. Participants also highlighted the importance of knowing where to locate this information and these controls, emphasizing that privacy details and settings should not be “buried” in documents such as terms of service.

Participants further noted that limited privacy protections may have downstream effects. When asked about AI policies in their workplaces, some participants reported that their jobs disallowed AI tools because of privacy concerns, while asserting that, if permitted, such tools would meaningfully improve their work functioning. This finding begins to illustrate why privacy discourse is not only a matter of individual concern but also a critical factor in access, participation, and employment outcomes.

Final Participant Comments

Participants’ hopes for improving AI systems were largely forward-looking and pragmatic, emphasizing better fit with real human needs rather than entirely new or speculative capabilities. Across responses, many participants expressed a desire for AI that adapts more effectively to individual users, particularly in how it recognizes different communication styles, languages, and accents, and how smoothly it works across everyday contexts and devices. This aspiration appeared broadly across the sample, but it was especially pronounced among BLV participants, who more often framed adaptability and customization as foundational to usability rather than as optional enhancements. One participant articulated this vision succinctly, explaining that AI would work best if it could “adapt more to individual users, such as learning my preferences, understanding different languages or accents better, and working smoothly across devices,” while also being faster, more accurate, easier to customize, and more transparent about data use.

Alongside these usability aspirations, participants repeatedly emphasized the importance of trust, particularly through transparency and human oversight. While many saw AI as a powerful and helpful tool, there was a consistent call for clearer boundaries around where AI should be relied upon and where human review should remain central. This concern surfaced across disabled and nondisabled respondents, often grounded in lived experiences of harm or near-harm when AI systems were treated as final decision-makers. One participant described contesting a healthcare claim that had initially been denied by an automated system and noted that a human reviewer ultimately approved it, reflecting a broader sentiment that AI should support, not replace, human judgment in high-stakes contexts. As they put it, relying on AI “far too much, way too soon” can have significant consequences for individuals, especially when decisions affect access to essential services.

Participants also framed improvement as a matter of balance rather than rejection. Many described meaningful ways AI already fits into their lives, particularly as a starting point, organizational aid, or emotional outlet, while still recognizing its limitations. This framing was common across groups and often carried a tone of cautious appreciation rather than enthusiasm or fear. One participant explained that they find value in AI summarizing information, helping them vent without straining personal relationships, and outlining travel plans, while also recognizing that it “lacks a human element” and should not be relied on completely. In this view, AI is most useful when it helps people get oriented, think through options, or release pressure, not when it replaces human relationships or judgment.

Differences between disabled and nondisabled participants were most visible in how aspirations were framed. Nondisabled participants more often focused on convenience, efficiency, and smoother interactions, while disabled participants were more likely to connect improvement directly to access, independence, and control. For disabled participants, reliability, customization, and clarity were not simply quality improvements but prerequisites for meaningful use. Requests for better speech understanding, clearer feedback when something goes wrong, and stronger control over how and when AI responds reflect a desire to reduce friction and uncertainty in tools they may rely on more heavily.

Taken together, participants’ aspirations point toward an AI future that is less about novelty and more about refinement. The improvements people want are grounded in everyday realities: systems that listen better, adapt to diverse users, explain themselves clearly, respect boundaries, and remain accountable when the stakes are high. Rather than calling for AI to do everything, participants repeatedly emphasized the value of AI that knows its role and performs it well.