
Parameter-Efficient Transfer Learning of Pre-Trained Transformer Models for Speaker Verification Using Adapters

Junyi Peng, Themos Stafylakis, Rongzhi Gu, Oldřich Plchot, Ladislav Mošner, Lukáš Burget and Jan Černocký

Brno University of Technology, Faculty of Information Technology, Speech@FIT, Czechia
Omilia - Conversational Intelligence, Athens, Greece

Tencent AI Lab, Shenzhen, China

DOI: 10.1109/ICASSP49357.2023.10094795

Recently, pre-trained Transformer models have received rising interest in the field of speech processing, thanks to their success in various downstream tasks. However, most fine-tuning approaches update all the parameters of the pre-trained model, which becomes prohibitive as the model size grows and can result in over-fitting on small datasets.

In this paper, we conduct a comprehensive analysis of parameter-efficient transfer learning (PETL) methods, which reduce the number of learnable parameters required to adapt pre-trained models to speaker verification tasks. Specifically, during fine-tuning the pre-trained model is frozen, and only lightweight modules inserted in each Transformer block are trainable (a method known as adapters). Moreover, to boost performance in a cross-language, low-resource scenario, the Transformer model is further tuned on a large intermediate dataset before being fine-tuned on the small target dataset. By updating fewer than 4% of the parameters, the proposed PETL-based methods achieve performance comparable to full fine-tuning (Vox1-O: 0.55%, Vox1-E: 0.82%, Vox1-H: 1.73%).
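As an illustration of the adapter idea described in the abstract, the following PyTorch-style sketch wraps a frozen Transformer block with a trainable residual bottleneck module. The class names, bottleneck dimension, and insertion point are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Lightweight bottleneck adapter added residually to a sub-layer output."""
    def __init__(self, dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck_dim)   # down-projection
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck_dim, dim)     # up-projection
        nn.init.zeros_(self.up.weight)               # start as an identity mapping
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))   # residual connection


class AdaptedBlock(nn.Module):
    """Frozen pre-trained Transformer block followed by a trainable adapter."""
    def __init__(self, block: nn.Module, dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.block = block
        for p in self.block.parameters():            # freeze backbone weights
            p.requires_grad = False
        self.adapter = BottleneckAdapter(dim, bottleneck_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.adapter(self.block(x))
```

In such a setup, the fraction of trainable parameters can be checked by comparing the parameter counts with and without `requires_grad`, which is how a "fewer than 4% of parameters updated" budget would be verified in practice.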
