Singer Identity Representation Learning using Self-Supervised Techniques
Abstract: Significant strides have been made in creating voice identity representations using speech data. However, the same level of progress has not been achieved for singing voices. To bridge this gap, we propose a framework for training singer identity encoders to extract representations suitable for various singing-related tasks, such as singing voice similarity and synthesis. We explore different self-supervised learning techniques on a large collection of isolated vocal tracks and apply data augmentations during training to ensure that the representations are invariant to pitch and content variations. We evaluate the quality of the resulting representations on singer similarity and identification tasks across multiple datasets, with a particular emphasis on out-of-domain generalization. Our proposed framework produces high-quality embeddings that outperform both speaker verification and wav2vec 2.0 pre-trained baselines on singing voice while operating at 44.1 kHz. We release our code and trained models to facilitate further research on singing voice and related areas.
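To make the training recipe summarized in the abstract concrete, below is a minimal sketch of one self-supervised option it describes: a SimCLR-style contrastive step on isolated vocal clips, where two pitch-shifted views of each clip are pulled together by an NT-Xent loss so the embedding becomes invariant to pitch. The `SingerEncoder`, `pitch_shift_view`, and `nt_xent` names, the toy convolutional backbone, and all hyperparameters are illustrative assumptions for this sketch, not the authors' implementation; the paper explores several self-supervised objectives, and only a contrastive one is shown here.

```python
# Hypothetical sketch of a SimCLR-style contrastive step for singer identity
# embeddings at 44.1 kHz; architecture and names are placeholders, not the
# authors' model.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchaudio

SAMPLE_RATE = 44_100  # the paper's models operate at 44.1 kHz


class SingerEncoder(nn.Module):
    """Toy waveform encoder producing an utterance-level embedding."""

    def __init__(self, dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 64, kernel_size=10, stride=5), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv1d(128, dim, kernel_size=4, stride=2), nn.ReLU(),
        )
        self.proj = nn.Linear(dim, dim)  # projection head for the contrastive loss

    def forward(self, wav):              # wav: (batch, samples)
        h = self.conv(wav.unsqueeze(1))  # (batch, dim, frames)
        h = h.mean(dim=-1)               # temporal average pooling
        return F.normalize(self.proj(h), dim=-1)


def pitch_shift_view(wav, max_steps=4):
    """Random pitch shift so the learned embedding is invariant to pitch."""
    steps = int(torch.randint(-max_steps, max_steps + 1, (1,)))
    return torchaudio.transforms.PitchShift(SAMPLE_RATE, n_steps=steps)(wav)


def nt_xent(z1, z2, temperature=0.1):
    """Standard NT-Xent loss over two batches of L2-normalised embeddings."""
    z = torch.cat([z1, z2], dim=0)                   # (2B, dim)
    sim = z @ z.t() / temperature                    # scaled cosine similarities
    mask = torch.eye(sim.size(0), dtype=torch.bool)  # exclude self-similarities
    sim = sim.masked_fill(mask, float("-inf"))
    b = z1.size(0)
    targets = torch.cat([torch.arange(b, 2 * b), torch.arange(0, b)])
    return F.cross_entropy(sim, targets)


if __name__ == "__main__":
    encoder = SingerEncoder()
    batch = torch.randn(8, 4 * SAMPLE_RATE)          # 8 clips of 4 s of vocals
    v1, v2 = pitch_shift_view(batch), pitch_shift_view(batch)
    loss = nt_xent(encoder(v1), encoder(v2))
    loss.backward()
    print(f"contrastive loss: {loss.item():.3f}")
```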