
Google has recently released several new Gemini features, including the Gemini 2.5 models. But that’s not all the Mountain View-based tech giant has on its roadmap. In a live demonstration at a TED Talk, the company teased its artificial intelligence (AI) Glasses and several of their capabilities. Separately, the company also hinted at new Gemini features that could arrive in the near future, as per a report. These largely revolve around expanding the utility and user experience of Gemini Live, the two-way real-time voice conversation feature.
Google AI Glasses, New Gemini Features Teased
In a TED Talk video, Shahram Izadi, Vice President and General Manager of Android XR at Google, presented a live demo of AI Glasses, a new product likely on the company’s roadmap. The wearable may share its name with the 2013 prototype that never made it to full production, but the tech giant is now adding Gemini’s capabilities to make it more functional.
Google first hinted at its extended reality (XR) glasses in December 2024 while introducing Android XR. “Created in collaboration with Samsung, Android XR combines years of investment in AI, AR and VR to bring helpful experiences to headsets and glasses,” the company said at the time.
In the latest demo, Izadi presented glasses that look like typical prescription glasses but are equipped with camera sensors and speakers. They also feature a display where Gemini appears and interacts with the user. Google showcased the AI chatbot seeing what the user sees and responding to queries in real time. For instance, Gemini could look at the crowd and instantly recite a haiku based on the expressions of the people in it.
Izadi also showcased a memory feature in the AI Glasses, first unveiled last year as part of Project Astra. With it, Gemini can remember objects and visual information it “sees” even after they have left the field of view of the user and the camera. Google said Gemini’s memory extends up to 10 minutes.
Separately, in an interview with CBS’ 60 Minutes, Google DeepMind CEO Demis Hassabis hinted that the memory feature could soon be expanded to Gemini Live. While Gemini Live with Video can currently see the video feed from a user’s device, it cannot remember what it has seen. Additionally, Google’s AI Glasses are said to do more than just answer questions; they are also expected to perform tasks such as purchasing a product online.
Further, Hassabis reportedly highlighted that Gemini Live could greet the user with a message when the feature is turned on.