ESPERANTO

JSALT 2022: first in-person preparation meeting in Le Mans

In order to prepare for the next JSALT workshop at JHU, Baltimore, the LIUM welcomed the ESPERANTO team in Le Mans for two days of intensive brainstorming around their project "Multi-lingual Speech to Speech Translation for Under-Resourced Languages".

For this first event, many partners were present:

  • Santosh Kesiraju, Renata Kolohva and Marek Sarvas from Brno University of Technology - Czech Republic
  • Pablo Gimeno and Luis Vicente from Universidad Zaragoza - Spain
  • Themos Stafylakis from OMILIA - Greece
  • Peter Vickers from University of Sheffield - United Kingdom
  • Guillaume Bernard and Olivier Galibert from LNE - France
  • Jarod Duret, Yannick Estève, Gaelle Laperriere, Mickael Rouvier and Natalia Tomashenko from Avignon Université - France
  • Cécile Macaire from Université Grenoble Alpes - France
  • Loïc Barrault, Emmanuelle Billard, Thibault Gaudier, Anthony Larcher, Antoine Laurent, Valentin Pelloin and Thomas Thebaud from Le Mans Université
  • Laurent Besacier from Naverlabs also joined the team on this occasion.

So that the ESPERANTO members arrive fully prepared at the upcoming JSALT workshop, the different tasks were divided among the sub-groups and refined during these two days.

This first meeting also helped build team cohesion through social events, such as a discovery of the historic quarter, the Cité Plantagenêt, offered by Le Mans Métropole.

The next meeting will take place in Avignon on 2-3 June.

Abstract of the project:

Seamless communication between people speaking different languages is a long-term dream of humanity, and artificial intelligence aims at reaching this goal. Despite recent major improvements in Machine Translation, Speech Recognition and Speech Translation, Speech to Speech Translation (SST) remains a central problem in natural language processing, especially for under-resourced languages. A solution to this problem is to gather and share information across modalities and high-resource languages to create a common multi-modal, multi-lingual representation space that can then be used to process under-resourced languages through transfer learning, as depicted in Figure 1.
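
The transfer-learning idea in the abstract, a shared representation learned on high-resource languages and reused for an under-resourced one, can be pictured with a minimal sketch. Everything below (module names, dimensions, the toy data) is a hypothetical illustration rather than the ESPERANTO system itself: a frozen shared encoder stands in for the common multi-lingual representation space, and only a small language-specific head is trained on the new language.

import torch
import torch.nn as nn

# Hypothetical sizes, chosen only for illustration.
FEATURE_DIM = 80    # e.g. log-Mel filterbank features
HIDDEN_DIM = 256
VOCAB_SIZE = 1000   # shared sub-word vocabulary across languages

class SharedEncoder(nn.Module):
    """Language-agnostic encoder, assumed pretrained on high-resource languages."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(FEATURE_DIM, HIDDEN_DIM, num_layers=2, batch_first=True)

    def forward(self, speech_features):
        # speech_features: (batch, time, FEATURE_DIM)
        hidden_states, _ = self.rnn(speech_features)
        return hidden_states  # points in the shared multi-lingual representation space

class LanguageHead(nn.Module):
    """Small language-specific output layer trained for the under-resourced language."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(HIDDEN_DIM, VOCAB_SIZE)

    def forward(self, hidden_states):
        return self.proj(hidden_states)

encoder = SharedEncoder()   # in practice, loaded from a multilingual checkpoint
for param in encoder.parameters():
    param.requires_grad = False   # transfer learning: reuse the shared space, do not retrain it

head = LanguageHead()
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Toy batch standing in for a small under-resourced-language corpus.
features = torch.randn(4, 120, FEATURE_DIM)        # (batch, time, features)
targets = torch.randint(0, VOCAB_SIZE, (4, 120))   # frame-level token targets

logits = head(encoder(features))                   # (batch, time, VOCAB_SIZE)
loss = criterion(logits.reshape(-1, VOCAB_SIZE), targets.reshape(-1))
loss.backward()
optimizer.step()
print(f"one fine-tuning step, loss = {loss.item():.3f}")

Freezing the encoder is only the simplest form of transfer; partial fine-tuning or adapter layers would follow the same pattern while adapting more of the shared model to the new language.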

Full proposal
