An Overview of the IberSpeech-RTVE 2022 Challenges on Speech Technologies

Eduardo Lleida, Luis Javier Rodriguez-Fuentes, Javier Tejedor, Alfonso Ortega, Antonio Miguel, Virginia Bazán, Carmen Pérez, Alberto de Prada, Mikel Penagarikano, Amparo Varona, Germán Bordel, Doroteo Torre-Toledano, Aitor Álvarez and Haritz Arzelus

ViVoLab, Aragón Institute for Engineering Research (I3A), University of Zaragoza, 50018 Zaragoza, Spain

Department of Electricity and Electronics, Faculty of Science and Technology, University of the Basque Country (UPV/EHU), Barrio Sarriena, 48940 Leioa, Spain; luisjavier.rodriguez@ehu.eus (L.J.R.-F.); mikel.penagarikano@ehu.eus (M.P.); amparo.varona@ehu.eus (A.V.); german.bordel@ehu.eus (G.B.)

Institute of Technology, Universidad San Pablo-CEU, CEU Universities, Urbanización Montepríncipe, 28668 Boadilla del Monte, Spain; javier.tejedornoguerales@ceu.es

Corporación Radiotelevisión Española, 28223 Madrid, Spain; virginia.bazan@rtve.es (V.B.); carmen.perez.cernuda@rtve.es (C.P.); alberto.deprada@rtve.es (A.d.P.)

AUDIAS, Electronic and Communication Technology Department, Escuela Politécnica Superior, Universidad Autónoma de Madrid, Av. Francisco Tomás y Valiente, 11, 28049 Madrid, Spain; doroteo.torre@uam.es

Fundación Vicomtech, Basque Research and Technology Alliance (BRTA), Mikeletegi 57, 20009 Donostia-San Sebastián, Spain; aalvarez@vicomtech.org (A.Á.); harzelus@vicomtech.org (H.A.)


Contact: lleida@unizar.es


DOI: https://doi.org/10.3390/app13158577

Evaluation campaigns provide a common framework with which the progress of speech technologies can be effectively measured. The aim of this paper is to present a detailed overview of the IberSpeech-RTVE 2022 Challenges, which were organized as part of the IberSpeech 2022 conference under the ongoing series of Albayzin evaluation campaigns. In the 2022 edition, four challenges were launched:

  1.  speech-to-text transcription;
  2.  speaker diarization and identity assignment;
  3.  text and speech alignment; and
  4.  search on speech.

Databases covering several domains (e.g., broadcast news, conference talks, parliament sessions) were released for these challenges. The submitted systems also span a wide range of speech processing methods, including hidden Markov model-based approaches, end-to-end neural network-based methods and hybrid approaches.
This paper describes the databases, the tasks and the performance metrics used in the four challenges. It also summarizes the most relevant features of the submitted systems and briefly presents and discusses the results obtained. Despite employing state-of-the-art technology, the relatively poor performance attained in some of the challenges reveals that there is still room for improvement. This encourages us to carry on with the Albayzin evaluation campaigns in the coming years.

