AccessWorld Podcast, Episode 30: EXTRA! EXTRA! New AFB AI Study Released
Episode Notes
In this episode of AccessWorld, a podcast on digital inclusion and accessibility, Aaron and Tony talk with the research team at AFB’s Public Policy and Research Institute about their latest round of research, found in the new report released this week: “The AI Quagmire: Benefits, Risks, and Aspirations of AI Through a Disability Lens.”
You can access the report at: www.afb.org/AIResearch2.
AccessWorld Podcast, Episode 30 Transcript:
V/O:
AFB.
Intro:
You're listening to AccessWorld, a podcast on digital inclusion and accessibility. AccessWorld is a production of the American Foundation for the Blind. Learn more at www.afb.org/aw.
Tony Stephens:
And welcome back, everybody, to the AccessWorld Podcast, a podcast on digital inclusion and accessibility from the American Foundation for the Blind. I'm your co-host, Tony Stephens, with the American Foundation for the Blind. And joining me as always today is Mr. Aaron Preece, coming back from CSUN in Anaheim, California. Hey, Aaron.
Aaron Preece:
Hey, how's it going?
Tony Stephens:
It is going well. We're looking forward to getting more updates on another podcast around all the things that happened while you were out in California. But also this week, we're very excited for a special episode of AccessWorld, because we are joined by the mighty team of the Public Policy and Research Institute here at AFB, who have this week released a major study as phase two of their AI research. This study has been going on; they started it about a year ago, and we are joined by pretty much the whole team, I feel like. We have an amazing group of knowledge here. Their IQs are going to stack a thousand times above ours, I think, Aaron. But let's get started with some introductions. So we're going to introduce first our Chief Public Policy and Research Officer with the Public Policy and Research Institute. And Stephanie, we'll go to you, and then get a chance to hear from everybody else who's on our podcast today.
Stephanie Enyart:
Hi, this is Stephanie Enyart.
Arielle Silverman:
Hi, this is Arielle Silverman, director of research.
Sarahelizabeth Baguhn:
Hi, this is Sarahelizabeth Baguhn, research specialist.
Angie Whisler:
Hi, this is Angie Whisler, researcher.
Alyssa Shock:
Hi, this is Alyssa Shock. I'm a research fellow.
Carmel Heydarian:
Hi there. It's Carmel Heydarian. I'm a research fellow.
Sarah Malaier:
And I'm Sarah Malaier. I'm senior advisor for public policy and research.
Tony Stephens:
And a mighty team it is. Thank you all so much for getting together with us. It's very exciting with the report coming out this week from the American Foundation for the Blind. Congratulations on the second phase of the research that you all have been undertaking for well over a year now. All right, Arielle, let's jump in with you first, because I know you've been sort of leading the team in this effort. Tell us a little bit about this study that's coming out, and how is it different from the first phase of research that was released last year?
Arielle Silverman:
Yeah, absolutely. So we published a study about a year ago that was an expert consensus study. It was asking experts who work in AI, or who study the policy implications of AI, to predict what kinds of issues might come up when we're thinking about AI and people with disabilities. And these experts gave us a lot to think about. They had some optimism about ways that AI can help people with disabilities thrive as an assistive tool. And they also had some concerns about potential risks of AI, such as discrimination in employment or in healthcare, and concerns about privacy, or the lack thereof, when it comes to AI. So they brought up some really important issues, but we didn't yet have the perspective of people who are actually using AI as consumers. The goal of this study was to compare and contrast how people with and without disabilities are using AI in 2025, and what kinds of barriers people with disabilities might be experiencing, whether they want to use AI or are confronted with automation in job seeking, for example. Does that present any barriers specifically for people with disabilities?
And we wanted to know a lot about people's opinions: whether they felt like AI was private enough, if they could trust it, how it compared to working with humans, and how they felt about the directions that AI could go in the future. So this was a really broad, cross-sectional survey study. And we believe we're one of the only groups to have a sample of over a thousand people with various types of disabilities, so not just blindness and low vision, but also neurodiversity and mental health disabilities and hearing and speech and all different sorts of disabilities to weigh in. And then we also had a comparison group of 665 people who don't have disabilities, so we could try to see what is disability specific and what is kind of a general pattern that everybody is experiencing with AI. So over the next little bit, we're going to share some findings with y'all about what we found and what it means for the future of AI.
Tony Stephens:
Well, thanks so much, Arielle, for sharing that, because it really is great to see this build itself out from the initial phase one of the research. I'll turn to Dr. Baguhn. Sarahelizabeth, I know you were diving into the data as much as, if not more than, so many other people in the space. What were the most exciting things? What were the key things in this phase two research that really jumped out at you all? What was the wow?
Sarahelizabeth Baguhn:
It's always so hard to choose, because we asked so many people so many questions that we have findings all over the spectrum of AI. And it has been very much the season of two things can be true at once, as we see findings in one group that are different from other groups, and also a lot of findings that are the same: how people want to use AI and what people are experiencing with AI can be pretty consistent. I think one of the ones that stands out to me is around privacy. We asked privacy questions in a variety of ways, looking at how people value privacy when dealing with AI. Would they rather talk to a human or a chatbot if they needed technical support? And for folks with disabilities, how much would they be willing to sacrifice on privacy to get at independence and efficiency?
And we did see a significant difference between the groups on that scale, with independence and efficiency on one end and total privacy on the other: folks with disabilities were more willing to give up some measure of privacy if it got them access to an assistive technology product that could make them more independent or more efficient with the task. That was even more pronounced in the blind and low vision group, certainly an area where we're accustomed to having human readers and working with people to get access. Folks are more likely to say, "Hey, give me a little AI image detection that can read for me. I just want to have access on the fly and not have to wait until I'm with a sighted person who can read text to me." So we really saw that there are a lot more motivators and pressures on people who are blind, have low vision, or have other disabilities to sacrifice privacy that they'd otherwise want.
But then when we dove into that even deeper, we asked people, "How would you feel about that when we're talking about sensitive data? What if you had to give an account number? What if you had to give your Social Security number?" And we dug into that question with: how would you handle it if the AI was cloud-based and your data might be visible to the company that makes the product, versus what if it's just on your device, and it didn't store anything or save anything, and you knew the session would be dumped at the end of your query? And 73% of people across the board would still want to use a human for sensitive data, or would go on to in some way prioritize that privacy of on-device processing and really having control over their data if the data is sensitive. So even though people are feeling this pressure, there's really a strong value placed on having privacy protected when using AI products.
Aaron Preece:
So in the survey, you covered both people with disabilities and people without disabilities. I'm curious how they differed in their responses and also where they were aligned.
Angie Whisler:
Yeah, this is Angie, and I can take that one. So we saw some incredible similarities between the two groups. Specifically, we found that participants, regardless of disability status, used AI in similar ways for employment and education. Everyone is using it for note-taking, using it to help productivity in some ways, and there's a lot of use of voice-activated assistants. We saw that for things like weather reports, for the day-to-day tasks, regardless of disability status, folks are using voice-activated assistants for a level of convenience. But I think that's where the difference comes into play. Sarahelizabeth already kind of alluded to this with the privacy question, but across the board, disabled participants used AI as not just a novelty, but as something of a tool. It was more than just a means to do something that they had already done in the past.
It was an opportunity for access and independence. We saw this specifically with our results about autonomous vehicles. While there was some hesitation from non-disabled participants about AVs and what they could mean for transportation workers and dangers on the road, for disabled participants, specifically blind and low vision participants, they offered an opportunity for independence and freedom of movement that is not possible in the current transportation infrastructure in the same way.
But regardless of that option, there was also the assistive use of AI. So unlike non-disabled participants, disabled participants were using AI as an assistive technology. They were using it not just to have an additional tool, like non-disabled participants, but to be able to expand their current reach and get access to systems that they didn't have in place already. AI by and large has been built for the mainstream, but disabled participants have been able to use it, regardless of some accessibility barriers, to catch up almost, and reduce some of the friction that comes with other technology systems.
Tony Stephens:
Well, Aaron and I have definitely talked a lot about that on the AccessWorld podcast, haven't we, Aaron? I mean, it's been one of those things where it's kind of life hacking, and AI has been an enormous life-hacking tool.
Aaron Preece:
For sure.

Tony Stephens:
I want to go back to the idea around the privacy, but also this idea of the trust y'all mentioned earlier on, the humans versus AI. We hear often about the battle of the bots that's out there. And it was really interesting going through the data to see where people settle in on trusting it, and how that varies between the disabled and the non-disabled populations: those of us who are blind or have low vision or have other disabilities, and then the general non-disabled population. Can one of y'all break down this idea of human versus AI and what the data showed about how people are trusting it?
Arielle Silverman:
Sure. This is Arielle, and I know on our team, we like to joke that this segment was called, "Who would you trust with your Social Security number, your drunk uncle or AI?" Because of course, humans are not always to be trusted with sensitive information either. We tested this in a few different ways, and one of the biggest findings was that regardless of disability status or any other characteristic, most people would still prefer to work with a human rather than AI if sensitive information is involved, like a Social Security number or an account number. 73% of the participants said that they would rather get help from a person with that kind of information than from an AI chatbot. We also asked some questions specifically of people who use AI for visual description: whether they would use AI for reading documents if the AI transmitted the information to the cloud, versus if it did not save or upload any information.
And not surprisingly, people were much more open to the idea of using AI for reading sensitive information if it did not save or upload any information than if it did do those things. We also asked some questions about whether people thought AI was more or less private than humans overall. And although there were definitely some varied opinions and a good chunk of people thought that they were equally private, more people said that AI was less private than humans than the reverse. A few people thought AI was more private than humans, but more people thought AI was less private than humans. And we also asked people to think about the trade-off between getting independence or efficiency by using AI versus giving up some of their privacy. And if someone was forced to make a choice between those two things, which would they choose? And again, there was definitely a diversity of opinions, but 36% of people with disabilities and 43% of people without disabilities said that they would prioritize their privacy over the benefits that they would gain from using AI.
So that trend was a little bit weaker for people with disabilities, but it was still present for both groups. And finally, we found a couple of interesting differences based on race that we definitely want to explore more. There was a slight trend for people who were not white to be maybe a little more open to using AI. We had a question about whether you would prefer to use humans or AI for tech support, not with any sensitive information shared, but just generally. And 60% of the white participants, compared to 52% of the non-white participants, said that they would rather work with a human. The non-white participants were also slightly less likely to say that AI was less private than humans. So in other words, it's possible that because of experiences of racial discrimination, people from racial and ethnic minority backgrounds might be less trusting of humans and more willing to give AI a chance to support them.
But that's definitely something that we want to explore more. And certainly what we've learned from these data is that people with disabilities, we still care about our privacy, even though sometimes we have been forced to give up privacy in order to have access. It's still a priority to be sure that the privacy of information is protected by AI tools, but it will be useful in the future to do more research, to understand more specifically how people are thinking about AI and how that changes over time as the AI changes.
Tony Stephens:
It's fascinating looking at the psychology behind it, what's going on inside our brains as we're dealing with this artificial brain.
Aaron Preece:
And speaking of psychology and privacy, one really interesting finding is on the mental health applications for AI. And I know that's a pretty hot-button issue in the world at large right now, and you had some interesting findings there. Can you unpack those more? And is there potential for people with disabilities to benefit from that, especially given the potential for people with disabilities to be more isolated than people without disabilities?
Alyssa Shock:
Sure. I'm Alyssa Shock; I will take this one. So it's funny, because when we first started asking questions about this, we were asking about therapy-specialized AIs for the most part. And then people were coming in and saying, "Hey, I'm using these general chatbots for therapy." So we said, "This is an important conversation." And as you said, yes, this is a very big part of the current discourse. And we did find some findings about the perceived benefits of AI for therapy, and different harms that could come up, or areas where we have to ask some more questions. One of the things that we found, though, is that most people who are using an AI in some kind of mental health context, and it's important to note that an AI will never be a therapist, are doing it not to replace therapy, but to perhaps supplement therapy.
Most of them have had a human therapist before. There are some who have not, but most have. And what we found that people liked about it was that it was available for venting, for example. People were able to just kind of use it to get things off their chest, to get out whatever is stressing them out. We also found that people liked the fact that it could help them organize their thoughts and feelings, know what they're feeling, understand themselves more. And also that it's helping them understand alternative perspectives, right? Emotional perspectives. Like, say that you're in some kind of relationship situation, for example, and you want to understand how the other person is feeling. You're like, "I'm so mad at my ex." Well, you want to empathize with your ex a little bit more.
It would help with that. It could also help with CBT skills, that's cognitive behavioral therapy type things, helping you change your thoughts and behaviors and feelings in the healthy ways that you wanted to. So those are some of the benefits that people were finding. And also there's this lack of social cost, this idea that it can be neutral, almost non-judgmentally neutral. There's anonymity that you wouldn't have with a human, though that could connect to the AI and privacy findings, right? And also this idea of maybe a lack of prejudice. But there are some things that were coming up that we still have questions about, and that's really important, because these are coming up in AI discourse right now around how people are using it in mental health. We found that it could indeed maybe escalate some crisis situations, or say things that are dangerous and inappropriate.
And we have some questions about wanting to make sure that that's not the case. And also, we wanted it to have cultural competence, to be able to take into account people's identities, race, age, gender, disability, other characteristics, and be able to use that to create adaptive responses. So we know people are going to be using this, and we want them to be able to use it in a way that's going to support them and not cause them unwanted harms in doing so. So there's still a lot of open questions and a lot of nuance that we want to take into account. And in terms of people with disabilities, we know that people with disabilities can be more isolated, and we don't want this to create more isolation. We want this to help with isolation.
So I think of it like screen time, for example. We know that there's such a thing as a good use of screens, right? Good use of technology, where it's helping people and empowering people. But people who are just on TikTok all day doomscrolling aren't necessarily going to feel very good, and it could be kind of not great for their mental health. So it's like that, right? It's all in how we use it. And we want to encourage and empower people to use these tools in the right ways that are going to support their mental health, while knowing it's also not a licensed practitioner.
Tony Stephens:
That's extremely interesting. I mean, just the idea that there's sort of an anonymity in that sense. I'm Irish Catholic, so the idea of the confessor, right? This neutral party that you share everything with. It goes kind of back to that idea of the trust in the other conversation, but it's interesting to see how people play that out. At the same time, I know there's probably some risks as well, right?
Alyssa Shock:
Yeah.
Tony Stephens:
Yeah.
Alyssa Shock:
If you have data training turned on, these conversations that are supposed to be anonymous can be used as training data. So I'd say take privacy into account as well if you're going to use this.
Tony Stephens:
Shifting gears a little, one of the areas that I also found interesting, and was kind of surprised by, because AI hasn't been out that long in terms of the ChatGPT world: everybody now is taking AI, and they'll take these platforms like OpenAI or Anthropic or Google's Gemini, and they start creating ways that they can use them for different things, like employment and the way people are using it for the hiring process. And you dove into employment and the use of AI in that space, because that's been one of the issues when we talk about prejudice, or the concerns that people with disabilities face: that constant battle when you walk into an office setting for an interview and suddenly there's the surprise that, oh, you have a dog, or you're in a wheelchair, and those preconceived notions.
It was interesting to see that already, I think it was 42% of the respondents said that in their job search, they're already encountering AI. What's playing out here? Is this good? Is this bad, having AI in this process as we move through the employment space for people with disabilities?
Sarahelizabeth Baguhn:
It's a very interesting situation. You can understand why companies want to employ algorithms, up to and including AI models, to try and filter through the stacks of applications they might receive, so they can be more efficient with their time, so they can find that diamond in the stack that they want to hire and onboard. And so there's a real motivation to optimize efficiency in this space, and that's driving a lot of very fast AI adoption. We have some concerns that it's very hard to see transparently what's happening when AI algorithms start filtering. It's not quite like a manually set up filter, where the HR team or the hiring team has specified everything that the AI is going to consider. We don't really have a lot of transparency into what it's weighting, how important different traits are as it makes those automated judgments. And so it feels a little bit risky to have these black-box operations happening and no opportunity to see what's going on.
I think philosophically, we have to think a little bit about how bias is distributed in our processes. We know that when humans are making the hiring decisions, as much as we might try to be reflective and conscious of the implicit biases we might hold, people do come to those tables with some amount of bias. So say you've got a team of 10 people reviewing applications. Some of them might be biased, maybe even a majority of them have biases against hiring people with disabilities, but you also have some people on the team who don't, and who can kind of hold the line on that and say, wait, why did you discard that candidate? Is it just their disability? If you move to an AI system that is filtering all of your applications, and that AI system is biased, now instead of 10 people with an assortment of bias levels, you have one level of bias being applied to everyone.
And so you might trade for a system and have it become much more biased than the team of humans you had doing the work. I think in a utopia, we also could trade for a system that does a better job of regulating for its internal biases from the data it was trained on and does a fairer job. And so we have this concern that people buy the AI system and implement it and deploy it hoping for or counting on an improvement in how bias is applied, but it's really a roll of the dice and you won't know until later what's been happening. And that's a tricky thing for companies to deal with.
Tony Stephens:
Like with humans then. So what you're saying is, if you have 10 hiring managers at a major company, let's say, two of them are jerks, they don't like dogs maybe. But in this case, if it's biased, then you have 10 people that all hate dogs. Is that kind of the idea? Interesting.
Sarahelizabeth Baguhn:
It's all or nothing. It can really shift your table out of balance and it also is very opaque. So you can ask the hiring managers, how do you feel about dogs? And they might lie about it or obfuscate it, but you also might have a way to read the person in front of you and tell. With AI, you don't have that option. You don't get to know the person over the course of things. And if the model changes in six months, it's no longer the same model you bought, but you're locked into a five-year subscription, how are you going to navigate that as a deployer?
Stephanie Enyart:
This is Stephanie, and I want to just jump in and say, you've done a great job of really highlighting those access issues, that transparency kind of access, where an employer will wrestle with not truly understanding what this algorithm and its outputs are doing, and how it may or may not be changing in the background, under a long-lasting contract with the developer. There are a lot of complicated issues related to those power dynamics there. But when you think about it from another perspective, you'll have an entity that's deploying something with an AI tool embedded also be on the hook for legal liability. And our liability landscape from a legal perspective is an area that will also need to mature and grow, because a developer and a deployer are going to be in really different positions.
If, for example, a particular company that's deployed an AI-embedded tool into its processes gets sued because the algorithm is apparently increasingly discriminatory, how can we sort all of this out from a legal perspective? We have a landscape that also needs to mature, just as our shifting environment is maturing. So it gets very complicated when you have deployers using tools that they have little transparency and visibility into, and little ability to understand changes going on with, but yet they may be on the hook. So it's going to be an interesting ride here.
Tony Stephens:
Going through the study, some more thoughts around access, and I want to shift gears with this, literally and figuratively, I guess: transportation. When we talk about access, there's access to employment opportunities, but also just access in our communities. That's where a lot of folks think of AI. They'll think of autonomous vehicles, because they're starting to pop up everywhere in cities around the country now, and that's, I think, one of the more visible pieces of AI that's out in our communities in certain parts of the country. But the report and study that you all dove into had a number of things around transportation, environmental access, key things like that. What were some of the most interesting findings, and anything that really jumped out in terms of transportation as well? Because that's a huge part of where we're going with the AI revolution.
Carmel Heydarian:
Yeah. So this is Carmel. I would love to take that question, Tony. I just wanted to emphasize my favorite part of our transportation findings, and that's that overall in the general sample, everyone had high support for public transit, and that was above 80%, I think. So that was pretty high support overall for public transit. That's
Tony Stephens:
Kind of some of the highest findings in the whole survey, right? Transportation was, yeah.
Carmel Heydarian:
Exactly. And I think, Tony, again, one of the most interesting things from that is obviously we see more support from people with disabilities than from people without disabilities, but again, this is all over 80%. Even people who were drivers, 84% of them were like, "We think that public transit is extremely important." That's incredible. Those are very high numbers. So looking at other applications of AI in transportation is really interesting. I did a little bit more research, because I'm really into this whole transportation thing. I was looking at an article the other day, I think it was from the Journal of Transportation and Land Use or something, where researchers had used AI to determine median lengths of crosswalks. We were seeing ranges from 30 feet to 78 feet. And I was reading in their discussion section that these long 78-foot crosswalks are such a distance to traverse in just a little amount of time.
It's really suggesting that city planning is kind of going toward car-centric instead of people-centric, which is really concerning. I was also looking at other AI tools out there, some of these urban AI integration tools. I don't think they're widely deployed, but they are being tested in cities, and they do things like optimize traffic lights, or detect faulty infrastructure, like damaged sidewalks, or where a sidewalk just kind of ends and then there's street and you're like, "Where do I go now?" Those kinds of things. And it was neat that these tools were saying, "We want to be proactive and predictive, not reactive." So they don't want to be reacting to crash data. Instead of relying on someone's fatality or someone's terrible injury, they want to be focusing on, "Okay, we can detect near misses.
This person almost got hit by a car, or this many people almost got hit by cars at this intersection. Or this stoplight or this stop sign, people run it all the time," and they just have that data. And the point is to have it be available to cities, so they can use that for grants, or they can just use that to speak in a more informed way: "Okay, this is a real problem. Streets and maintenance, we need to figure it out." And then another application I thought was interesting was AI-to-AI communication. So they were saying this infrastructure, perhaps in the crosswalks or the signals or the lights, can signal to AVs that are approaching, "Hey, there's a massive amount of people that are still crossing, you just need to chill," or, "This person is still crossing, watch out." So I thought that was really cool.
And then, just rounding it all out, they can extend walk signals. If these are smart systems, they'll be able to say, "Okay, this person needs more time," and alert cars of that. And then just supporting city efforts, like I said. One of the cautionary things from all this was that you need to be as transparent as possible, because if you're deploying these things in neighborhoods, the neighbors need to know. We can't be using facial recognition, none of that, because surveillance of people is not the intention here. It's more like, okay, this was a near miss, this was almost a crash, that kind of thing. So just rounding it all out, the priorities should always be the people and not the cars.
Tony Stephens:
And one of the arguments with the autonomous vehicles early on was the idea that they wanted to make them electric, so there was a lot of lobbying around zero emissions. They wanted them to be smart, where they're all thinking together; the idea is that if all the cars on the road are AI, there's zero congestion. But then, too, the zero fatalities, right? I mean, I'm someone that's been hit four times by cars.
Carmel Heydarian:
Oh my God.
Tony Stephens:
And it's interesting where you get that sort of idea of how it can be used. I myself personally will trust autonomous vehicles more; three of those four times, it was distracted drivers. And it's so interesting around that, with the AV space. You know that, Sarah, because you got to ride with me the first time we rode in an AV together, God, was it almost two years ago, I guess, in Washington, DC. There are a lot of conversations, I know, amongst the blind and low vision community, and Aaron and I can attest to this as those that use guide dogs, about the constant denials by humans, trying to get into a rideshare or a taxi cab with a guide dog; it happens all the time. And the idea is that these vehicles in and of themselves can be safer on our streets to some degree, if they're working effectively and using that kind of data like you were talking about, Carmel, but also just that sense of liberation. Was there anything in the study that excites those of us, like Aaron and I, that have guide dogs, that reaffirms this idea that the AV world has some of those benefits? I mean, Sarah, I know that you work a lot in the AV space here in Washington, DC. Anything that gets us excited? Are we the only ones that feel that way?
Sarah Malaier:
Yeah, I mean, this is a really interesting space, and you are definitely not alone in the blind community in thinking about the importance of AVs. In fact, we found that 74% of the blind respondents thought that developing AVs is important, and that's compared to only 41% of sighted participants. So there's a real extra interest in AV development among people who are blind, and maybe that's because blind people are pretty consistently non-drivers. And there's also this finding that non-drivers are more likely to believe that AV development is important than drivers are. So I think where you have a significant need for transportation, there's also a significant interest in AVs. But I think it's worth also looking at what the actual experience is when people get into the car. We didn't have a lot of participants who actually had the opportunity to use an AV. Of the 35 blind participants who had used an AV, only 49% said the experience was fully accessible to them, compared to 75% of sighted riders.
So I see that as a problem. You've got the people who are most enthusiastic about it getting less of a good experience out of it. And digging a little deeper, what's the cause for that? I mean, it's what you could expect. It's finding the vehicle. It's the lack of good auditory cues. It's the safety of the pickup and drop-off zone and navigation to and from. So those are some of the big things that were raised. But we also noted that only nine of the respondents with physical disabilities of any kind reported having ridden in an AV, and that may be because a lot of people with physical disabilities who use mobility devices just can't use AVs, because most of them don't have ramps or wheelchair securement. So that's something that we're looking at from a policy perspective: how do we get to vehicles that are more accessible?
And that's both the human machine interface and it's the wheelchair accessibility.
And a drumbeat that we hit all the time is AVs really have a lot of potential for people. We see the vision of independence and freedom and the ability to go where you want and when you want and by yourself, that's a big one, not having those guide dog denials, but it has to be accessible. You have to be able to do the wayfinding without human support. The kiosks in the vehicle need to be accessible, and somebody who uses a wheelchair needs to be able to get into the vehicle and needs to be able to secure themselves independently, because we don't want this to just be something that works for a small group of people.
Tony Stephens:
There's been a lot of challenges throughout the whole rideshare space too, the accessibility for people using wheelchairs and things like that, since it really hit the ground. But yeah, you were saying, go ahead. Yeah.
Sarah Malaier:
Right, right. It really hasn't been a priority across the automobile industry. It is very much an aftermarket experience, not an intentional design up front, which is where it would be less costly. But speaking of cost, I think that there were two other concerns that came up in the study, and that's the cost of AVs and their availability. We've done some other analyses in other research projects where we dubbed the financial pressure of being disabled the "disability squeeze." And I think there were a lot of questions among the respondents to our surveys about the cost of AVs. Only 36% of AV riders said they could afford to use an AV whenever they wanted to or needed to, based on how much the ride cost when they tried it out. And as many as 18% thought that the cost was so prohibitive that they were unlikely to use an AV again.
And in a similar vein, if AVs only run in San Francisco and Austin and Washington, DC or wherever, that's not the whole country. And we know that people with disabilities live in every community. So a lot of enthusiasm, but we need to be able to trust the vehicles, we need to be able to access them, afford them, and have them available.
Aaron Preece:
One of the things that was interesting is that there were some gender differences in the way people were using the AI, but also specifically in the way people were learning to use AI. Could you speak more to that? And if that's ... I noticed, at least in the part I saw, it was how people were learning, but was there also a learning preference difference depending on gender?
Arielle Silverman:
So we asked people how they learned to use AI and they could choose multiple ways if they learned in multiple ways. And then we also asked people how would they ideally prefer to learn to use AI. And we actually found that the gender patterns were similar for the two questions. So both in terms of how people have learned and how people preferred to learn. Regardless of gender, almost everybody said that they learned to use AI by playing with it on their own. So a lot of people learned without any formal instruction, and it was relatively rare to have learned on the job or through an online course or any kind of structured learning method. But we did find that compared to other genders, women were likelier to report that they learned by having friends and coworkers show them. And men compared to other genders were most likely to report that they learned by watching videos.
And the same patterns arose in how they preferred to learn. So women preferred to learn through their friends or coworkers, and men preferred to learn by watching videos. And so this has some interesting implications for those who might be creating AI curricula in terms of being sure that the curriculum meets the needs and preferences of people of all genders.
Tony Stephens:
Super interesting. So I mean, does that track with just in general, I mean, gender differences between the way people learn? I mean, there's always the stereotype that men think with their eyes too much.
Arielle Silverman:
Yeah. I mean, I'm a psychologist, but I'm certainly not an expert on all the science around gender differences in learning, but certainly at least the stereotype is that women prefer to do things socially and learn things through peers and that the visual maybe processing is stronger for men. But I can't speak definitively on that because that's not my area of expertise.
Tony Stephens:
Yeah. Yeah. No, fair enough. Fair enough. But I think we build into our own social stereotypes and everything around the world. So that's me probably just going down that rabbit hole.
Aaron Preece:
Same. I know whenever I first saw that, it seems like there's been a lot of discussion on socialization differences, gender differences and how often people are socializing and stuff in the wider, just like the wider world right now. I don't know what the hard data really says on that and how accurate it is, but I know I've seen articles and stuff on that topic. So that's where my mind went when I saw that.
Arielle Silverman:
I think certainly, if nothing else, what it means is that for people with and without disabilities, disability isn't the only factor influencing how people learn. And there actually were no real big differences between people with and without disabilities in their preferences.
Tony Stephens:
Yeah. Fascinating. Wow. This has been an amazing set of data and research you all have been presenting to us on the podcast today. What do we do with all this? What's the next step for you all? Where do we go with this data, and how can it influence people? Or what can be done with this to help guide the hands of those in charge of all this stuff?
Angie Whistler:
Yeah. So this is Angie. By and large, the wishes and the future aspirations that we found participants wanting fit into a couple different categories: clarity, communication, consistency, customization, and most of all, caution. For so many participants, improvements were about balance, rather than rejection or letting AI out into this kind of free-for-all rodeo, right? Especially for disabled participants, AI improvements need customization. We need adaptability. We need clarity on what is being sacrificed and communication on how it works. These are foundational to usability for so many people, rather than just optional enhancements. AI for a large number of participants represents this boon, a central hearth for advancement that can open pathways for progress that we haven't even considered, but that has to come in a safe environment. These aspirations point toward a future that isn't about novelty but is about refinement. We mentioned at the beginning of the podcast that, especially for disabled participants, AI isn't usually used as a fun tool that you try out on the side, but as a tool that is opening up doors we never thought possible.
They are grounded in everyday reality: systems that listen better, adapt to diverse users, adjust for accents that maybe they didn't expect in the beginning, explain themselves clearly, and respect the boundaries that people want in terms of privacy. And most of all, across the board, regardless of any markers, people wanted accountability when the stakes get high.
In this study, we found that AI is most useful when it helps people get oriented, stay stable, think through options, or release pressure valves that we didn't think of before, not when it replaces human relationships, judgment, or jobs, or comes at the sacrifice of things bigger than it. They don't want AI to do everything. Participants repeatedly emphasized that the value of AI comes in it knowing its role, performing that role well, and doing so without compromising protective policies or accessibility, right? The nature of AI as a large language model comes with the fact that it learns what you put into its own little world, right? But people want guardrails. If it doesn't have those guardrails to maintain the role that they want it to do well, then it ... well, no pun intended, but it goes off the rails. AI is going to continue to get smarter.
It's going to get better, and especially that's going to happen if we put safety barriers around it, if we start implementing policies that protect users and ensure that AI maintains its progressive course, rather than starting to act as almost this black hole catchall for societal issues, right? Without standards and regulations, participants noted, we can't teach AI what we mean when we say accessible, when we mean clarity, communication, consistency, without a bit of caution on the side. And that comes with customization features, that comes with the expansion of accessibility features, and that also comes with not sacrificing, or costing people, the joys that they already have. We can't force AI into a role it shouldn't have as the last decision maker when it comes to transportation, healthcare, or privacy. AI is going to work best when it works in tandem with our wants rather than deciding for us.
Tony Stephens:
Oh, an incredible amount of data you all had to look through, and truly enlightening, Angie, and everybody else on the team. Stephanie, you and the team did an amazing job with this phase two. I'll throw it to you, Stephanie, to close things out. What's next? Where are you and the team thinking of going from here?
Stephanie Enyart:
Well, I can promise you all that we're very interested in studying more. We've peeled back layers of the onion here, if you will, and that means there are more layers underneath. There's a lot that excited us about this study, and directions that we would like to take future research in. So we're looking for the ideas that others have and are excited about as they dive into this research with us. So I guess, share your ideas and what you find useful, interesting, helpful, areas where you want us to unpack things even more, because all in all, this is an evolving story, not only for our own research and education work here at AFB, but for all of us, to understand how AI can really be a space of great promise and what kinds of issues we need to begin to put guardrails around to ensure that we all have access to use it in the most powerful way possible.
So more to come.
Tony Stephens:
For you and the entire team, thank you so much for taking time to join us today. Folks can check out this report now on the AFB website at afb.org/airesearch. I had that right, right, team? Did I get that right? Pretty sure it's right.
Arielle Silverman:
Yes.
Tony Stephens:
Yep. We've got it down in the links for the podcast as well. Be sure to check that out, and also check out this and all our past Access World podcast episodes at afb.org/aw. You can get 26 years of Access World Magazine right there on your desktop or your mobile phone, wherever you are. Thanks again, everybody, for joining us today, and we will talk to you next time on Access World Podcast. Take care. You've been listening to Access World, a podcast on digital inclusion and accessibility. Access World is a production of the American Foundation for the Blind, produced at the Pickle Factory in Baltimore, Maryland. Our theme music is by Cosmonkey, compliments of artlist.io. To email our hosts, Aaron and Tony, email communications@afb.org. To learn more about the American Foundation for the Blind, or even to help support our work, go to www.afb.org.