
Google has announced SignGemma, a new artificial intelligence (AI) model that can translate sign language into spoken text. The model, which will join the Gemma family, is currently being tested by the Mountain View-based tech giant and is expected to launch later this year. Like the other Gemma models, SignGemma will be open-source and available to individuals and businesses. It was first showcased during the Google I/O 2025 keynote, and it is designed to help people with speech and hearing disabilities communicate effectively, even with those who do not understand sign language.
SignGemma Can Track Hand Movements and Facial Expressions
In a post on X (formerly known as Twitter), the official Google DeepMind handle shared a demo of the AI model and details about its release window. This was not the first look at SignGemma, however: it was briefly showcased at the Google I/O event by Gus Martins, Gemma Product Manager at DeepMind.
We’re thrilled to announce SignGemma, our most capable model for translating sign language into spoken text. 🧏
This open model is coming to the Gemma model family later this year, opening up new possibilities for inclusive tech.
Share your feedback and interest in early… pic.twitter.com/NhL9G5Y8tA
— Google DeepMind (@GoogleDeepMind) May 27, 2025
During the showcase, Martins highlighted that the AI model can translate sign language into text in real time, making face-to-face communication seamless. The model was trained on datasets covering different sign languages; however, it performs best when translating American Sign Language (ASL) into English.
According to MultiLingual, since SignGemma is an open model that can run locally, it functions without an Internet connection, making it suitable for use in areas with limited connectivity. It is said to be built on the Gemini Nano framework and to use a vision transformer to track and analyse hand movements, shapes, and facial expressions. Beyond making it available to developers, Google could integrate the model into its existing AI tools, such as Gemini Live.
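Neither Google nor MultiLingual has detailed how developers will call the model, so the following Python sketch is purely illustrative. It assumes a hypothetical on-device pipeline matching the description above: frames are captured locally with OpenCV and encoded with a generic vision transformer from torchvision (standing in for SignGemma's encoder), and the final sign-to-text decoding step is left as a comment because no public SignGemma interface exists yet.

```python
# A minimal, hypothetical sketch -- SignGemma's actual API has not been
# published, so every component here is an assumption used only to
# illustrate the described pipeline: a vision transformer encodes
# webcam frames (hand movements, shapes, facial expressions) fully
# on-device, and a separate decoder would turn those features into text.

import cv2                                        # pip install opencv-python
import torch
from torchvision.models import vit_b_16, ViT_B_16_Weights

# Generic vision transformer standing in for SignGemma's video encoder.
weights = ViT_B_16_Weights.DEFAULT
encoder = vit_b_16(weights=weights).eval()
preprocess = weights.transforms()                 # resize + normalise frames


def encode_frame(frame_bgr):
    """Encode one BGR webcam frame into a feature vector (runs locally)."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    chw = torch.from_numpy(rgb).permute(2, 0, 1)  # HWC uint8 -> CHW
    with torch.no_grad():
        return encoder(preprocess(chw).unsqueeze(0))


cap = cv2.VideoCapture(0)                         # no network access needed
features = []
while len(features) < 32:                         # a short window of frames
    ok, frame = cap.read()
    if not ok:
        break
    features.append(encode_frame(frame))
cap.release()

# A real system would feed `features` to a sign-to-text decoder here;
# that component is hypothetical until Google publishes SignGemma.
```

The point of the sketch is the data flow rather than the model itself: every step runs locally, which is what makes the offline use MultiLingual describes possible.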
Calling it “our most capable model for translating sign language into spoken text,” DeepMind confirmed that SignGemma will be released later this year. The accessibility-focused model is currently in early testing, and the tech giant has published an interest form inviting individuals to try it out and provide feedback.