An Attention-Based Backend Allowing Efficient Fine-Tuning of Transformer Models for Speaker Verification
Peng Junyi, Oldřich Plchot, Themos Stafylakis, Ladislav Mošner, Lukáš Burget and Jan Černocký

Brno University of Technology, Faculty of Information Technology, Speech@FIT, Czechia
Omilia - Conversational Intelligence, Athens, Greece




DOI: 10.1109/SLT54892.2023.10022775

In recent years, the self-supervised learning paradigm has received extensive attention due to its great success in various downstream tasks. However, fine-tuning strategies for adapting such pre-trained models to the speaker verification task have yet to be fully explored.

In this paper, we analyze several feature extraction approaches built on top of a pre-trained model, as well as regularization and a learning rate scheduler that stabilize the fine-tuning process and further boost performance. Multi-head factorized attentive pooling is proposed to factorize the comparison of speaker representations into multiple phonetic clusters. We regularize towards the parameters of the pre-trained model and set a different learning rate for each layer of the pre-trained model during fine-tuning. The experimental results show that our method can significantly shorten the training time to 4 hours and achieve SOTA performance: 0.59%, 0.79% and 1.77% EER on Vox1-O, Vox1-E and Vox1-H, respectively. Code is available at https://github.com/JunyiPeng00/IEEE-SLT22-Pretrained-Model-for-SV.
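As a rough illustration of the ideas summarized above, the sketch below shows, in PyTorch, a multi-head attentive pooling layer over frame-level features from a pre-trained transformer (each head may be read as focusing on a different, phonetic-cluster-like region), layer-wise learning rates, and an L2 penalty pulling fine-tuned weights back towards the pre-trained checkpoint. The class and helper names (`MultiHeadAttentivePooling`, `layerwise_param_groups`, `l2_towards_pretrained`), the feature dimension, head count, learning rates and regularization weight are illustrative assumptions, not the paper's exact formulation; see the released code for the authors' implementation.

```python
# Hypothetical sketch of fine-tuning building blocks; all names and
# hyperparameters are illustrative, not taken from the paper.
import torch
import torch.nn as nn


class MultiHeadAttentivePooling(nn.Module):
    """Pool (batch, frames, feat_dim) features into a fixed-size embedding."""

    def __init__(self, feat_dim=768, num_heads=8, emb_dim_per_head=32):
        super().__init__()
        # One attention score per head and per frame.
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, feat_dim),
            nn.Tanh(),
            nn.Linear(feat_dim, num_heads),
        )
        # Each head's weighted statistics are projected to a small
        # sub-embedding; the speaker embedding is their concatenation.
        self.proj = nn.ModuleList(
            nn.Linear(2 * feat_dim, emb_dim_per_head) for _ in range(num_heads)
        )

    def forward(self, x):
        w = torch.softmax(self.attention(x), dim=1)           # (B, T, H)
        subs = []
        for h, proj in enumerate(self.proj):
            wh = w[:, :, h:h + 1]                              # (B, T, 1)
            mean = (wh * x).sum(dim=1)                         # weighted mean
            var = (wh * x.pow(2)).sum(dim=1) - mean.pow(2)
            std = var.clamp(min=1e-6).sqrt()                   # weighted std
            subs.append(proj(torch.cat([mean, std], dim=-1)))
        return torch.cat(subs, dim=-1)                         # (B, H * sub_dim)


def layerwise_param_groups(transformer_layers, base_lr=1e-5, decay=0.9):
    """Assign smaller learning rates to lower layers of the pre-trained model."""
    n = len(transformer_layers)
    return [
        {"params": layer.parameters(), "lr": base_lr * decay ** (n - 1 - i)}
        for i, layer in enumerate(transformer_layers)
    ]


def l2_towards_pretrained(model, pretrained_state, weight=1e-4):
    """Penalize deviation of fine-tuned weights from the pre-trained checkpoint."""
    penalty = sum(
        (p - pretrained_state[name].to(p.device)).pow(2).sum()
        for name, p in model.named_parameters()
        if name in pretrained_state
    )
    return weight * penalty
```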



