At its I/O 2025 conference, Google announced SignGemma, a new AI model designed to translate sign language into spoken-language text in real time.
Built for on-device use, SignGemma is part of Google’s open-source Gemma family of lightweight models that run efficiently on local hardware.
Although designed to be massively multilingual, SignGemma currently performs best on American Sign Language (ASL) to English translation tasks.
While still in its early testing phase, SignGemma is expected to be publicly available by the end of 2025. Google has opened an interest form for those who want to try the model and provide feedback.
In a post on X, Google highlighted SignGemma’s potential to advance inclusive tech and shared a short demo of the model.
Gus Martins, Gemma Product Manager at Google DeepMind, called SignGemma “the most capable sign language understanding model ever.”
During the developer keynote, Martins encouraged developers and Deaf and Hard-of-Hearing community members to build on the foundation model.
Commenting on the announcement, Sally Chalk, CEO of UK-based sign language AI company Signapse, welcomed the development but emphasized the need for Deaf community involvement. “It’s important to ensure that technology that hopes to serve the Deaf community is developed with the Deaf community,” she told Slator.
Chalk added that progress in this space is accelerating, with “exciting developments happening on an almost daily basis.”