Direct Text to Speech Translation System Using Acoustic Units

Victoria Mingote, Pablo Gimeno, Luis Vicente, Sameer Khurana, Antoine Laurent and Jarod Duret

ViVoLab, Aragón Institute for Engineering Research (I3A), University of Zaragoza, 50018 Zaragoza, Spain

MIT Computer Science and Artificial Intelligence Laboratory, Cambridge, MA, US

LIUM - Le Mans University, France

LIA - Avignon University, France




Contact: (vmingote, pablogj, lvicente)@unizar.es


DOI: 10.1109/LSP.2023.3313513

This letter proposes a direct text to speech translation system based on discrete acoustic units. The framework takes text in different source languages as input and generates speech in the target language without requiring text transcriptions in that language.

Motivated by the success of acoustic units in previous work on direct speech to speech translation, we use the same pipeline to extract them: a speech encoder combined with a clustering algorithm. Once the units are obtained, an encoder-decoder architecture is trained to predict them, and a vocoder then generates speech from the predicted units. Our approach to direct text to speech translation was evaluated on the new CVSS corpus, with two different text mBART models employed as initialisation.
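The unit-extraction step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes frame-level features from a pretrained speech encoder (here simulated with random vectors) and uses a small hand-rolled k-means to map each frame to a discrete acoustic unit index.

```python
import numpy as np

def kmeans_units(features, k, n_iter=20, seed=0):
    """Minimal k-means: map frame-level features to discrete unit indices."""
    rng = np.random.default_rng(seed)
    # Initialise centroids from k randomly chosen frames.
    centroids = features[rng.choice(len(features), size=k, replace=False)].copy()
    for _ in range(n_iter):
        # Assign each frame to its nearest centroid (squared L2 distance).
        dists = ((features[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        units = dists.argmin(1)
        # Move each centroid to the mean of its assigned frames.
        for j in range(k):
            if (units == j).any():
                centroids[j] = features[units == j].mean(0)
    return units

# Stand-in for speech-encoder output: T=200 frames of dimension D=32.
# In the paper's pipeline these would come from a pretrained encoder.
rng = np.random.default_rng(1)
feats = rng.normal(size=(200, 32))
units = kmeans_units(feats, k=16)
print(units.shape, int(units.min()), int(units.max()))
```

The resulting integer sequence is the discrete target that the encoder-decoder translation model learns to predict, and that the vocoder later converts back into a waveform.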

The proposed systems achieve competitive performance for most of the language pairs evaluated. Moreover, the results show a remarkable improvement when our architecture is initialised with a model pre-trained on more languages.




