Training Speaker Embedding Extractors Using Multi-Speaker Audio with Unknown Speaker Boundaries

Themos Stafylakis, Ladislav Mosner, Oldrich Plchot, Johan Rohdin, Anna Silnova, Jan Cernocky

Omilia - Conversational Intelligence, Athens, Greece
Brno University of Technology, Speech@FIT and IT4I Center of Excellence, Brno, Czechia


In this paper, we demonstrate a method for training speaker embedding extractors using weak annotation. More specifically, we use the full VoxCeleb recordings and the names of the celebrities appearing in each video, without knowledge of the time intervals in which they appear.

We show that by combining a baseline speaker diarization algorithm that requires no training or parameter tuning, a modified loss with aggregation over segments, and a two-stage training approach, we are able to train a competitive ResNet-based embedding extractor. Finally, we experiment with two different aggregation functions and analyze their behaviour in terms of their gradients.
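The abstract does not specify the two aggregation functions, but a common contrast when pooling per-segment scores into a recording-level loss is uniform averaging versus LogSumExp (soft-max) pooling, whose gradients distribute credit across segments very differently. The sketch below is illustrative only, not the paper's implementation; the function names and the temperature `tau` are assumptions:

```python
import numpy as np

def mean_aggregate(scores):
    # Uniform pooling: the gradient w.r.t. every segment score is 1/N,
    # so all segments (including those without the target speaker)
    # contribute equally to the update.
    return float(np.mean(scores))

def logsumexp_aggregate(scores, tau=1.0):
    # Soft-max pooling (numerically stabilized): the gradient w.r.t.
    # each segment score is softmax(scores / tau), so high-scoring
    # segments, presumably those containing the labeled speaker,
    # dominate the update.  tau is an illustrative temperature.
    s = np.asarray(scores, dtype=float) / tau
    m = s.max()
    return float(tau * (m + np.log(np.mean(np.exp(s - m)))))

def logsumexp_grad(scores, tau=1.0):
    # Closed-form gradient of logsumexp_aggregate w.r.t. the scores:
    # the softmax of scores / tau (sums to 1).
    s = np.asarray(scores, dtype=float) / tau
    e = np.exp(s - s.max())
    return e / e.sum()
```

By Jensen's inequality the LogSumExp value upper-bounds the mean, and its gradient concentrates on the best-matching segments, which is why the two pooling choices behave differently when most segments of a video do not contain the labeled celebrity.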
