

Additional features include:
- Photos: Import photos from your library for translation.
- Camera: Translate text via the camera.
- Phrasebook: Star and save translated words and phrases for future reference.
- Handwriting: Draw text characters instead of typing.

- Conversations: Translate bilingual conversations on the fly.
- Photos: Translate text in taken or imported photos.
- Instant camera translation: Translate text in images instantly by just pointing your camera.
- Offline: Translate with no internet connection.
- Text: Translate between up to 133 languages by typing.

Building a universal language translator, like the fictional Babel Fish in The Hitchhiker's Guide to the Galaxy, is challenging because existing speech-to-speech and speech-to-text systems only cover a small fraction of the world's languages. But we believe the work we're announcing today is a significant step forward in this journey. Compared to approaches using separate models, SeamlessM4T's single system approach reduces errors and delays, increasing the efficiency and quality of the translation process. This enables people who speak different languages to communicate with each other more effectively.

SeamlessM4T builds on advancements we and others have made over the years in the quest to create a universal translator. Last year, we released No Language Left Behind (NLLB), a text-to-text machine translation model that supports 200 languages and has since been integrated into Wikipedia as one of the translation providers. We also shared a demo of our Universal Speech Translator, which was the first direct speech-to-speech translation system for Hokkien, a language without a widely used writing system. And earlier this year, we revealed Massively Multilingual Speech, which provides speech recognition, language identification and speech synthesis technology across more than 1,100 languages. SeamlessM4T draws on findings from all of these projects to enable a multilingual and multimodal translation experience stemming from a single model, built across a wide range of spoken data sources with state-of-the-art results.

This is only the latest step in our ongoing effort to build AI-powered technology that helps connect people across languages. In the future, we want to explore how this foundational model can enable new communication capabilities, ultimately bringing us closer to a world where everyone can be understood. Learn more about SeamlessM4T on our AI blog.
Offline voice translator license#
In keeping with our approach to open science, we're publicly releasing SeamlessM4T under a research license to allow researchers and developers to build on this work. We're also releasing the metadata of SeamlessAlign, the biggest open multimodal translation dataset to date, totaling 270,000 hours of mined speech and text alignments.
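To make the single-model point above concrete, here is a minimal sketch of running both text-to-text and speech-to-text translation through one SeamlessM4T checkpoint. It assumes the Hugging Face transformers integration and the facebook/hf-seamless-m4t-medium checkpoint name, which are not part of this announcement; treat the package, class, and checkpoint names as assumptions to verify against the official release.

```python
# Minimal sketch, assuming the Hugging Face `transformers` SeamlessM4T integration
# and the "facebook/hf-seamless-m4t-medium" checkpoint (not the official
# seamless_communication release path described in this post).
import numpy as np
from transformers import AutoProcessor, SeamlessM4TModel

processor = AutoProcessor.from_pretrained("facebook/hf-seamless-m4t-medium")
model = SeamlessM4TModel.from_pretrained("facebook/hf-seamless-m4t-medium")

# Text-to-text translation: English -> French through the multitask model.
text_inputs = processor(text="Hello, how are you?", src_lang="eng", return_tensors="pt")
text_tokens = model.generate(**text_inputs, tgt_lang="fra", generate_speech=False)
print(processor.decode(text_tokens[0].tolist()[0], skip_special_tokens=True))

# Speech-to-text translation: same checkpoint, only the input modality changes.
# Placeholder waveform; substitute real mono audio sampled at 16 kHz.
waveform = np.random.randn(16000).astype(np.float32)
audio_inputs = processor(audios=waveform, sampling_rate=16000, return_tensors="pt")
speech_tokens = model.generate(**audio_inputs, tgt_lang="fra", generate_speech=False)
print(processor.decode(speech_tokens[0].tolist()[0], skip_special_tokens=True))
```

Both calls go through the same weights; only the input modality and target language change, which is the practical difference from chaining a separate speech recognizer and text translator.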
