Recently, Google hosted its annual developer conference, Google I/O, where countless updates to Google products were announced, many of which are now available to the public. Gemini, Google's suite of AI technologies, was the highlight of the show and dominated the announcements. For this article, I wanted to gather the announced or updated technologies with the greatest potential impact on accessibility. As mentioned, this article focuses on AI-related updates, as those were the focus of the main Google presentation. Since this blog serves as a companion to our Apple WWDC piece, which covers Apple's operating systems, those seeking information on the latest additions and updates to the Android operating system should check out this article covering updates announced at The Android Show, which was hosted prior to Google I/O proper. If you would like to watch the Google keynote yourself, Google has provided an audio-described version. Major kudos to Google: there were many demonstrations and a great deal of visual content during the presentation, and the audio description was excellent, providing a much richer experience. I highly recommend checking it out.
Gemini Understanding the World
Google is working to make Gemini a universal AI assistant that can help you with multiple tasks seamlessly. To do this, they are developing ways for Gemini to simulate and understand the real world, and multiple projects are making major strides in this area. The most intriguing to me was Gemini Robotics: Gemini models designed to control and direct robots. To do this accurately, Gemini needs to understand the world around it so it can direct the robot properly, whether through navigation or manipulation of a robot arm. By understanding the world well enough to manipulate or navigate through it, this type of world modeling could be massively helpful for people who are blind or have low vision, as the AI becomes more and more capable of assisting with complex tasks.
Already, Google has partnered with Aira to bring some of these features together into an AI assistant specifically aimed at helping people who are blind or have vision loss with daily tasks. It's heartening to see focused attention on how AI can assist the blind and low vision community.
As this technology evolves, I can envision Gemini growing into a highly capable personal assistant: not just providing spoken feedback during tasks, but offering contextual help that feels increasingly intuitive, like having a sighted guide. In time, Gemini could also extend its usefulness through robotic integration, performing delicate tasks that require visual precision beyond what tactile feedback or voice guidance alone can handle. While it's hard to pin down specific examples, since many of these use cases would be highly specialized, I could see applications in areas like intricate artistry, fine craftsmanship, or even precision engineering, where vision and steadiness are critical to success.
Agent Mode
A feature that complements the above and supports Gemini as an accessibility assistant is Agent Mode, which allows you to state a task and have Gemini perform it. Gemini works out what you want done and how to accomplish it, including browsing the web, performing research, and interacting directly with your Google apps and content. For example, one of the demos in the keynote showed Gemini locating bike parts online from a local shop, contacting the shop directly to purchase them, and reporting back. This seems to synergize well with the world-modeling and deeper understanding capabilities being built into Gemini, bringing it closer to the kind of assistance a person might provide.
Android XR
Android XR is Google's operating system for virtual reality headsets and smart glasses. Giving AI direct access to what you see is one of the more intuitive and potentially powerful uses of the technology, making assistance faster and more seamless thanks to the immersive nature of the experience. XR virtual reality headsets also have the potential to be quite accessible, as their soundscapes are often designed to be realistic and spatial, which benefits blind and low vision users by default.
The integration of Gemini into these headsets could bring its visual understanding and assistance capabilities to blind or low vision users, significantly improving navigation and comprehension in complex virtual environments. This is especially promising given that Gemini could have access to the underlying structure of virtual worlds, allowing it to be even more precise than it might be in the real world.
On smart glasses, XR becomes especially useful simply by giving Gemini access to your actual field of view. Beyond visual assistance, the glasses can also take advantage of many of Gemini's other features, which may be helpful for those new to technology, such as completing tasks via spoken input instead of a screen reader.
The Bottom Line
Overall, Gemini is getting smarter and more capable at a breakneck pace, with many of these innovative features already available today. If you want to see the full list of announcements and all that is coming (and has already arrived) for Gemini, check out this post where Google itemizes all the updates for Google I/O 2025.