Dear AccessWorld readers,

Welcome to the fall edition of AccessWorld!

Before we jump into this quarter's issue, I want to make you aware of a research opportunity currently available through AFB. Our Public Policy and Research Institute is continuing its work on the impact of AI on people who are blind or have low vision. This study explores how AI use differs between people with and without disabilities.

We are conducting a survey of both groups to learn how people use AI, what they use it for, what they think of it, and any problems they may have encountered with AI systems. The survey is open to everyone, because we want to understand whether these experiences differ between people with disabilities and those without. Whether you are blind, have low vision, or are fully sighted, we want to hear from you. If you know sighted individuals who may be interested, please share the survey with them as well. You can find more information and sign up at this link.

In this quarter’s issue, we feature an article I’ve been eagerly anticipating for many months. Steve Kelley brings us a review of the Meta AI smart glasses, detailing how they function for someone with low vision. I’ve long thought these glasses could be a major game-changer in accessibility, and I was excited for the opportunity to have them reviewed. Since Steve’s article focuses on low vision use, I’d like to briefly share my own perspective as someone who is blind and has been using the glasses for several months.

If you’re unfamiliar with Meta’s smart glasses, they connect to your smartphone and interface with one of Meta’s multimodal large language models. They allow you to take photos or videos, listen to music, and ask the glasses a wide variety of questions. Though not designed specifically for blind or low-vision users, the built-in camera can describe or read things aloud, making them a potentially valuable tool.

A recently added feature called Live Video allows for continuous interaction with the AI while it processes video streamed directly from your glasses—almost like having someone looking over your shoulder. One of the biggest benefits of the Meta AI glasses, aside from the natural positioning of the camera on your face, is their low latency. With live video sessions, the response time becomes even shorter, allowing near-instant replies. I’ve found this especially helpful for tasks such as sorting groceries or clothing, where speed makes a difference.

As mentioned, the glasses are intended for the general public rather than blind or low-vision users. This means you need to be precise with your questions, since the system defaults to interpreting queries as though a sighted person is asking. For example, if you’re holding a can of soda and ask, “What’s in my hand?” the response might simply be, “A soda can,” without identifying the brand. Instead, you need to ask, “What does the label on this can say?”

Like all AI systems, the glasses can sometimes “hallucinate” or confidently give incorrect information. Over time, I’ve noticed these errors becoming less frequent, and I expect the technology to continue improving. Still, I recommend using the recognition features in situations where you already have some idea of what you’re working with—for example, identifying a soda can in your refrigerator (where you already know the options) rather than in a store. That way, if the glasses misidentify something, you’ll know to adjust for a clearer view.

I’m excited to see how this technology develops further. Personally, I’ve found the ease of use, natural head-mounted camera, and low latency to be genuinely helpful in specific situations.

Be sure to check out Steve’s article for a detailed look at the device itself and its usefulness for those with low vision.

Also in this issue, Janet Ingber reviews the apps and websites for Yelp and TripAdvisor—popular review platforms that are helpful whether you’re traveling or simply considering a new business.

Next, I’ve contributed a piece on the Object Navigation features of the NVDA screen reader. Object Navigation is a very powerful tool, though often daunting for new users because of how different it is from standard navigation. This article aims to make getting started with Object Navigation more straightforward.

Finally, we close this issue with a two-part article by new author Dmitriy Lazarev, presented in full here. He traces the history of the Mortal Kombat fighting game franchise and shows how, even from the beginning—often accidentally—it included features that made it more accessible to players who are blind. He also explores how the series evolved into a deliberate model of accessibility in its modern entries. Importantly, his work highlights how the franchise's increasing complexity sometimes introduced new barriers even as it pushed forward in other areas. That uneven journey ultimately led to the highly accessible experience the series offers today, illustrating how technological progress can both challenge and advance inclusion.

As always, I thank you for being a reader of AccessWorld, and I hope you enjoy our Fall 2025 issue. If you have comments or questions, feel free to email me at apreece@afb.org or leave a comment on our social platforms—Facebook, Twitter, LinkedIn, and more.

Sincerely,

Aaron Preece

Editor-in-Chief, AccessWorld

American Foundation for the Blind
