Sample-Efficient Unsupervised Domain Adaptation of Speech Recognition Systems: A case study for Modern Greek

Georgios Paraskevopoulos, Student Member, IEEE, Theodoros Kouzelis, Georgios Rouvalis, Athanasios Katsamanis, Member, IEEE, Vassilis Katsouros, Member, IEEE, Alexandros Potamianos, Fellow, IEEE

G. Paraskevopoulos is with the Graduate School of ECE, National Technical University of Athens, Athens, Greece. G. Paraskevopoulos, T. Kouzelis, G. Rouvalis, A. Katsamanis, and V. Katsouros are with the Institute for Speech and Language Processing, Athena Research Center, Athens, Greece. A. Potamianos is with the Faculty of ECE, National Technical University of Athens, Athens, Greece.

Abstract—Modern speech recognition systems exhibit rapid performance degradation under domain shift. This issue is especially prevalent in data-scarce settings, such as low-resource languages, where the diversity of the training data is limited. In this work we propose M2DS2, a simple and sample-efficient finetuning strategy for large pretrained speech models, based on mixed source and target domain self-supervision. We find that including source domain self-supervision stabilizes training and avoids mode collapse of the latent representations. For evaluation, we collect HParl, a 120-hour speech corpus for Greek, consisting of plenary sessions in the Greek Parliament. We merge HParl with two popular Greek corpora to create GREC-MD, a testbed for multi-domain evaluation of Greek ASR systems. In our experiments we find that, while other Unsupervised Domain Adaptation baselines fail in this resource-constrained environment, M2DS2 yields significant improvements for cross-domain adaptation, even when only a few hours of in-domain audio are available. When we relax the problem to a weakly supervised setting, we find that independent adaptation for audio using M2DS2 and for language using simple LM augmentation techniques is particularly effective, yielding word error rates comparable to the fully supervised baselines.

Index Terms—Unsupervised Domain Adaptation, Automatic Speech Recognition, Multi-Domain Evaluation, Greek Speech

I. INTRODUCTION

Automatic Speech Recognition (ASR) models have matured to the point where they can enable commercial, real-world applications,
e.g., voice assistants and dictation systems, making ASR one of machine learning's success stories. However, the performance of ASR systems rapidly deteriorates when the test data domain differs significantly from the training data. Domain mismatches can be caused by differences in the recording conditions, such as environmental noise, room reverberation, speaker and accent variability, or by shifts in the target vocabulary. These issues are exacerbated in the case of low-resource languages, where diversity in the training data is limited due to the poor availability of high-quality transcribed audio. Therefore, specialized domain adaptation approaches need to be employed when operating under domain shift. Unsupervised Domain Adaptation (UDA) methods are of special interest, as they do not rely on expensive annotation of domain-specific data for supervised in-domain training.
In contrast to supervised approaches, where the existence of labeled data would allow training domain-specific models, UDA methods aim to leverage data in the absence of labels to improve system performance in the domain of interest [1], [2]. In the context of speech recognition the importance of UDA is even greater, as the transcription and alignment process is especially expensive and time-consuming. Adaptation methods have been explored since the early days of ASR, at different levels of the system and in different deployment settings [3]. UDA has been used to improve the robustness of ASR under a variety of recording conditions, including far-field speech, environmental noise and reverberation [4], [5], [6]. Furthermore, UDA has been used for speaker adaptation, and to improve performance under speaker, gender and accent variability [7], [8]. UDA has also been employed for multilingual and cross-lingual ASR, in order to improve ASR models for low-resource languages [9], adapt to different dialects [10], and even train speech recognition systems for endangered languages [11].

Classical speech adaptation techniques involve feature-based methods, e.g., speaker normalization [12] and other feature-level approaches [13]–[15], or multi-condition training [16]. Generally, traditional approaches require some knowledge about the target domain and the domain mismatch, e.g., regarding the noise and reverberation variability [17], and require specific engineering for each adaptation scenario. Modern ASR pipelines increasingly rely on end-to-end neural networks, e.g.,
[18], [19], or on large pretrained models with self-supervised objectives [20], [21]. The key approaches employed for UDA of end-to-end ASR models can be grouped into three categories, namely teacher-student learning [10], domain adversarial training [22], and target domain self-supervision [23]. The benefit of these techniques is that they do not require any special knowledge about the source or the target domain. This makes end-to-end UDA approaches versatile and applicable to a wider array of adaptation scenarios. In particular, adaptation through self-supervision has been shown to be a robust, simple and efficient technique for adapting state-of-the-art speech models [24]. Here, we leverage in-domain self-supervision to propose the Mixed Multi-Domain Self-Supervision (M2DS2) finetuning strategy, enabling sample-efficient domain adaptation of wav2vec2-based [20] speech recognition models, even when the available in-domain data are scarce.
TABLE I: Summary of related works on unsupervised domain adaptation for ASR.

| Work | Method | Model | Adaptation Setting | Language |
|---|---|---|---|---|
| [23], [25], [26] | Teacher-Student (hard and soft labels) | Conformer RNN-T [27], Transformer CTC, RNN-T [19] | News speech, Voice search, Far-field, Telephony, YouTube | English |
| [4], [5] | Teacher-Student (soft labels) | TDNN-LSTM [28] | Noise, Far-field | English |
| [29] | Teacher-Student (hard and soft labels) | NiN-CNN [30] | Dialects, Children speech | Japanese |
| [31] | Teacher-Student (soft labels) | Streaming RNN-T [32] | Multilingual | English, Brazilian Portuguese, Russian, Turkish, Nordic/Germanic |
| [6], [33], [34] | Domain Adversarial Training | TDNN (Kaldi) [35], [36], DNN-HMM | Noise, Channel | English |
| [37] | Domain Adversarial Training | RNN-CTC [38] | Far-field | English |
| [8], [39] | Domain Adversarial Training | TDNN (Kaldi), RNN-T | Accent | Mandarin |
| [7], [40] | Domain Adversarial Training | DNN-HMM, CNN-DNN | Speaker, Gender, Accent | English |
| [9] | Domain Adversarial Training | DSN [41] | Multilingual | Hindi, Sanskrit |
| [24], [42] | Continual Pre-Training | wav2vec2 [20] | Audiobooks, Accents, Ted Talks, Telephony, Crowd-sourced, Parliamentary speech | English |
| [43] | Continual Pre-Training | wav2vec2 | Cross-lingual | Korean |
| [11], [44] | Continual Pre-Training | XLSR-53 [21], wav2vec2 | Low-resource languages | Ainu, Georgian, Somali, Tagalog, Farsi |
Our key contributions are organized as follows:

1) Inspired by recent advances on UDA for Natural Language Processing systems [45], we propose a finetuning strategy for speech models, where the self-supervised objective is based on a contrastive loss, in Section III. Contrary to prior works, which leverage only in-domain self-supervision, we find that in this contrastive setting relying solely on target domain self-supervision leads to mode collapse of the latent representations, and that mixed source and target domain self-supervision is essential. We demonstrate this empirically in Section VII-B.

2) We collect and curate HParl, the largest publicly available¹ speech corpus for Greek, collected from plenary sessions in the Greek Parliament between 2018 and 2022.
We establish a data collection, pre-processing and alignment pipeline that can be used for continuous data integration, as the parliamentary proceedings get regularly uploaded. We provide a detailed description of our data collection process and the dataset statistics in Section IV-A. HParl is merged in Section IV with two popular Greek corpora (Logotypografia and CommonVoice) to create GREC-MD, a testbed for multi-domain evaluation of ASR systems in Greek.

3) We demonstrate that, while other baselines fail at UDA in our resource-constrained setting, M2DS2 can improve model performance in the target domain in multiple adaptation scenarios in Section VII. Special emphasis is given to the sample efficiency of our approach in Section VII-A, where we demonstrate successful adaptation even when we reduce the available in-domain data.
4) When we relax the problem to a weakly supervised adaptation setting, where some in-domain text is available but the pairing between audio and text is unknown, we find that M2DS2 can be effectively combined with simple N-gram adaptation techniques to reach comparable performance with the fully supervised baseline in Section VIII. Furthermore, we find that a simple text augmentation approach, based on perplexity filtering of a large corpus, can produce strong adaptation results, even for small amounts of in-domain text.

Additionally, we provide a formulation of the UDA problem for ASR in Section II-A and link prior works to this formulation in Sections II-B, II-C and II-D. We provide detailed experimental settings for reproducibility in Section V, and an upper-bound estimation of UDA performance with fully supervised finetuning in Section VI.

¹ We plan to release this version of HParl under the CC BY-NC 4.0 license upon publication. The other corpora used in this work are available through their respective distributors.

II. BACKGROUND

We start by formally defining the Unsupervised Domain Adaptation (UDA) problem. Initially, we formulate the problem in a classification setting and then we extend it for speech recognition. We then provide an overview of different adaptation approaches in the literature, and link each approach to the UDA problem formulation. Table I presents a summary of the key adaptation settings and applications that are explored in the literature. We see that a relatively small set of methods, and their variants, is used to address multiple real-world ASR problems, for example cross-lingual, accent, speaker and noise adaptation. Furthermore, while the majority of the works focus on the English language, there is an effort to explore other popular languages, e.g., Mandarin, and under-resourced languages, e.g., Ainu, Somali, etc.

A. Problem Definition

Formally, the problem of UDA can be defined as follows. Let $X \subseteq \mathbb{R}^n$ be a real-valued space that consists of n-dimensional feature vectors $x \in X$, and $Y$ a finite set of labels $y \in Y$, i.e.,
$Y = \{1, 2, \dots, L\}$. Furthermore, assume two different distributions, i.e., the source domain distribution $S(x, y)$ and the target domain distribution $T(x, y)$, defined on the Cartesian product $X \times Y$. The goal is to train a model that learns a mapping from feature vectors $x_T$ to their respective labels $y_T$ for samples drawn from the target distribution, $(x_T, y_T) \sim T$. At training time we have access to samples from the source distribution $S(x, y)$ and the marginalized target distribution $T(x)$, i.e., no target labels are provided. We define the training dataset $D$ as the concatenation of the source and target training sets, $D = (D_S, D_T)$. $D_S$ and $D_T$ are defined as sequences of tuples, i.e.,

$$D_S = \{(x_i, y_i) \mid (x_i, y_i) \sim S(x, y),\; 1 \le i \le N\},$$
$$D_T = \{(x_i, \emptyset) \mid x_i \sim T(x),\; 1 \le i \le M\}, \qquad (1)$$

where we draw $N$ samples from $S(x, y)$ and $M$ samples from $T(x)$. Finally, we augment the tuples in $D$ with a domain indicator function:

$$D = \{(x_i, y'_i, \mathbb{1}_i) \mid 1 \le i \le N + M\},$$
$$\mathbb{1}_i = \begin{cases} 0 & \text{if } x_i \sim S(x) \\ 1 & \text{if } x_i \sim T(x) \end{cases}, \qquad y'_i = \begin{cases} y_i & \text{if } x_i \sim S(x) \\ \emptyset & \text{if } x_i \sim T(x). \end{cases} \qquad (2)$$
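For concreteness, the short sketch below builds the pooled training set of Eq. (1) and (2) in Python, assuming the source and target samples are already loaded into plain lists; the helper name `build_uda_dataset` and the container layout are illustrative choices, not part of the paper.

```python
# A minimal sketch of the pooled UDA training set of Eq. (1)-(2).
# The helper name and the container layout are illustrative assumptions.
from typing import Any, List, Optional, Tuple

def build_uda_dataset(
    source: List[Tuple[Any, Any]],  # D_S: (features, label) pairs drawn from S(x, y)
    target: List[Any],              # D_T: unlabeled features drawn from T(x)
) -> List[Tuple[Any, Optional[Any], int]]:
    """Return tuples (x_i, y'_i, 1_i), with indicator 0 for source and 1 for target."""
    pooled = [(x, y, 0) for (x, y) in source]   # labeled source samples
    pooled += [(x, None, 1) for x in target]    # unlabeled target samples (y'_i = None)
    return pooled

if __name__ == "__main__":
    D_S = [("utt_s1.wav", "first transcript"), ("utt_s2.wav", "second transcript")]
    D_T = ["utt_t1.wav", "utt_t2.wav", "utt_t3.wav"]
    D = build_uda_dataset(D_S, D_T)
    # Only tuples with indicator 0 carry labels usable by a supervised loss.
    print(D)
```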
1) Unsupervised (Acoustic) Adaptation for ASR: The above definition can be directly extended to the case of speech recognition, with some modifications. In detail, we modify the feature space $X$ to be the set of (finite) sequences of real-valued feature vectors, $(x_k)_{k \in \mathbb{N} \setminus \{\infty\}} \in X \subseteq (\mathbb{R}^n)^*$. Furthermore, the label space $Y$ is modified to be the set of sequences $(y_m)_{m \in \mathbb{N} \setminus \{\infty\}}$, where $Y = (\{1, 2, \dots, L\})^*$ contains finite-length sequences over a finite lexicon. For CTC training we make the assumption that $k > m$ for any sample $(x_k, y_m)$, i.e., feature sequences are longer than their respective label sequences [46]. The rest of the definitions need no modifications.
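To illustrate the length requirement above, the snippet below evaluates a CTC loss on random tensors with PyTorch; the shapes and vocabulary size are arbitrary, and the only point it makes is that each input (frame) length must exceed the corresponding target (token) length.

```python
# Minimal CTC example (PyTorch) illustrating the k > m assumption:
# each feature sequence must be longer than its label sequence.
import torch
import torch.nn.functional as F

T, N, C = 50, 2, 32   # frames, batch size, output vocabulary (index 0 = blank)
S = 12                # maximum label length in the batch

log_probs = F.log_softmax(torch.randn(T, N, C), dim=-1)  # (T, N, C) log-probabilities
targets = torch.randint(1, C, (N, S), dtype=torch.long)  # label ids, 0 reserved for blank
input_lengths = torch.tensor([50, 43])                   # k: frames per utterance
target_lengths = torch.tensor([12, 7])                   # m: tokens per utterance, m < k

ctc = torch.nn.CTCLoss(blank=0, zero_infinity=True)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
print(loss.item())
```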
2) Unsupervised (Language) Adaptation for ASR: Adaptation for ASR systems can also be performed at the language level, i.e., on the label space. In this setting, we assume that the target domain samples are drawn from the marginalized target distribution $T(y)$. The target dataset $D_T$ now consists of tuples of the form $(\emptyset, y_i)$, where $y_i$ is the label word sequence $(y_m)_{m \in \mathbb{N} \setminus \{\infty\}}$ for the $i$-th sample.

3) Weakly Supervised Adaptation for ASR: The last setting we explore is the case where both audio and language in-domain samples are available, but the mapping between them is unknown. This situation can be encountered in real-world settings, e.g., when in-domain audio and text are collected independently. For example, consider the case where audio clips from newscasts are collected along with contemporary newspaper articles. Another example is the case where long audio clips are available alongside their transcriptions, but without fine-grained time alignments². In this case the target domain samples are drawn independently from the marginalized distributions $T(x)$ and $T(y)$, and the target dataset $D_T$ consists of tuples of the form $(x_i, \emptyset)$ and $(\emptyset, y_i)$.

² While a fully supervised in-domain dataset can be constructed in this case using long or forced alignment methods, this is not a focal point for the experimental part of this work.

B. Teacher-Student Models

Teacher-Student learning, or self-training, is one of the earliest methods in semi-supervised learning [47]–[49]. The key idea is to reduce the problem of unsupervised learning of the task at hand in the target domain to a supervised one. The general methodology is to train a teacher model $g_S$ using the labeled data in the source domain $D_S$, and then use it for inference on the target domain to produce pseudolabels $\hat{y}_i = g_S(x_i)$, $x_i \sim T(x)$. The target domain dataset $D_T$ is augmented with these silver labels, to contain tuples $(x_i, \hat{y}_i)$. Finally, a student model $g_T$ is trained in a supervised fashion, using the augmented $D_T$ or a combination of $D_S$ and $D_T$. This process is usually repeated, with the student model serving as the teacher for the next iteration, until no further improvement is observed.
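The loop below sketches one round of this recipe, including the kind of confidence-based filtering discussed next; `teacher_decode` and `train_supervised` are placeholder callables standing in for an actual ASR decoder and training routine, not components defined in the paper.

```python
# One iteration of Teacher-Student adaptation with simple confidence filtering.
# The callables and the confidence convention are illustrative stand-ins.
from typing import Callable, List, Tuple

def pseudolabel_round(
    teacher_decode: Callable[[str], Tuple[str, float]],     # path -> (hypothesis, confidence)
    train_supervised: Callable[[List[Tuple[str, str]]], None],
    source_set: List[Tuple[str, str]],                      # D_S: (audio path, transcript)
    target_audio: List[str],                                # D_T: unlabeled audio paths
    confidence_threshold: float = -1.0,                     # e.g. average log-likelihood
) -> List[Tuple[str, str]]:
    """Generate silver labels for D_T with the teacher, filter them, train the student."""
    silver = []
    for path in target_audio:
        hypothesis, confidence = teacher_decode(path)
        if confidence >= confidence_threshold:              # discard untrustworthy labels
            silver.append((path, hypothesis))
    train_supervised(source_set + silver)                   # student sees D_S plus filtered D_T
    return silver

if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end.
    fake_decode = lambda path: (f"hypothesis for {path}", -0.5)
    fake_train = lambda data: print(f"training student on {len(data)} utterances")
    pseudolabel_round(fake_decode, fake_train, [("s1.wav", "reference one")], ["t1.wav", "t2.wav"])
```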
More recently, soft-target Teacher-Student learning has been explored for ASR [26], [31], [50], where the KL divergence between the teacher and student output label distributions is used as the loss function. Being trained only on the source domain data, the teacher model is susceptible to error propagation. Filtering is a commonly used technique to achieve the right balance between the size of the target domain data used for training the student model and the noise in the pseudolabels. Confidence scoring based on the likelihood is usually applied, discarding those utterances for which the hypothesized labels are untrustworthy [51]. In [25] dropout is used to measure the model uncertainty: the agreement between model predictions with and without dropout is used for confidence scoring. In [23] a multi-task training objective with a confidence loss is applied, to minimize the binary cross-entropy between the estimated confidence and the binary target sequence.

In order to learn more robust and generalizable features from the teacher model, Noisy Student Training (NST) has been proposed in [52]. The teacher model generates pseudolabels for $D_T$, while the student model is trained on a heavily augmented version of $D_T$ [52]. In [52], [53] the augmentation of the input target data is performed with SpecAugment [54], while in [29] a spectrum frequency augmentation is performed. In [4] Teacher-Student learning with soft labels is introduced for ASR to tackle noisy, far-field, and children speech. In [5], this approach is extended for LF-MMI based models and used for noisy, far-field and bandwidth adaptation. In [29] a weighted sum of hard- and soft-target cross-entropy losses is used for Japanese dialects and children speech adaptation. Ramabhadran et al. [31] propose self-adaptive distillation, and a method for distilling from multiple teachers, applied across several multilingual ASR systems for different language groups.
A comparison between soft and hard targets for RNN-T models [19] showed that soft targets perform better when the teacher and student models have the same architecture; otherwise, hard targets are superior [50].

C. Domain Adversarial Training

Domain Adversarial Training (DAT) was initially introduced for image classification [55]. The key idea is to train a model that learns deep features that solve the task at hand in the source domain, while being invariant with respect to the domain shift. Concretely, the model is trained end-to-end using a combination of the supervised task loss $\mathcal{L}_t$, learned on $D_S$, and the domain discrimination loss $\mathcal{L}_a$, i.e., $\mathcal{L} = \mathcal{L}_t - \alpha \mathcal{L}_a$. The loss $\mathcal{L}_a$ is a binary cross-entropy, trained for domain discrimination using the tuples $(x_i, \mathbb{1}_i)$. Note that the minus sign in the loss indicates adversarial learning, i.e., the model should learn features that cannot discriminate between domains, while still solving the task.
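The minus sign is commonly realized with a gradient reversal layer placed before the domain discriminator. The PyTorch sketch below is a generic, simplified illustration with a small classification head in place of a full ASR objective; it is not the exact architecture of any of the cited systems.

```python
# Gradient reversal layer commonly used to implement L = L_t - alpha * L_a.
# Encoder, heads and feature sizes are toy placeholders for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, alpha):
        ctx.alpha = alpha
        return x.view_as(x)                       # identity in the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.alpha * grad_output, None     # flipped, scaled gradient for the encoder

encoder = nn.Sequential(nn.Linear(40, 64), nn.ReLU())   # shared feature extractor
task_head = nn.Linear(64, 10)                            # supervised task classifier
domain_head = nn.Linear(64, 2)                           # source-vs-target discriminator

x = torch.randn(8, 40)                                   # a batch of acoustic features
task_labels = torch.randint(0, 10, (8,))                 # available for source samples only
domain_labels = torch.randint(0, 2, (8,))                # the indicator 1_i of Eq. (2)

features = encoder(x)
loss_task = F.cross_entropy(task_head(features), task_labels)
loss_domain = F.cross_entropy(domain_head(GradReverse.apply(features, 1.0)), domain_labels)
(loss_task + loss_domain).backward()   # reversal pushes the encoder toward domain-invariant features
```

In the cited ASR systems the task head and loss would be the respective acoustic-model objective computed on source utterances, rather than this toy classifier.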
In [6] DAT is employed for noise adaptation, using a noise-corrupted version of WSJ [56] as the target dataset. Using the Aurora-4 [57] dataset, which has labels associated with the noise type, Serdyuk et al. [33] train an adversarial noise classifier. In [8] and [39] DAT is utilized for accent adaptation for Mandarin and English respectively. Anoop C.S. et al. [9] propose DAT to address the scarcity of data in low-resource languages which share a common acoustic space with a high-resource language, namely Sanskrit and Hindi. They empirically demonstrate the effectiveness of adversarial training, presenting experiments with and without the reversal of the domain classification loss.

D. Leveraging In-domain Self-supervision

These lines of work have roots in Natural Language Processing tasks [45], [58], and explore domain adaptation by leveraging the in-domain data $D_T$ for self-supervised learning. The core focus is domain adaptation of large pre-trained models, e.g., [59], and self-supervision is achieved through the pre-training self-supervised loss $\mathcal{L}_s$. This process can either take place in stages, via continual pre-training [58], or through a multitask objective $\mathcal{L} = \mathcal{L}_t + \alpha \mathcal{L}_s$, as in [45].

Continual Pre-Training (CPT) has been explored for the adaptation of ASR models. Robust wav2vec2 [24] explores the effectiveness of CPT for domain adaptation, indicating the importance of utilizing unlabeled in-domain data. In CASTLE [42], CPT is combined with an online pseudolabeling strategy for domain adaptation of wav2vec2. Cross-dataset evaluation on popular English speech corpora indicates that CPT helps to reduce the error rate in the target domain.
In [43] and [11] CPT is utilized for cross-lingual adaptation of wav2vec2 for Korean and Ainu respectively. Notably for Ainu, which is an endangered language, CPT has resulted in significant system improvements. DeHaven and Jayadev [44] compare CPT and pseudolabeling for adapting XLSR-53 to four under-resourced languages, i.e., Georgian, Somali, Tagalog and Farsi. They find that both approaches yield similar improvements, with CPT being the more computationally efficient approach.

While CPT yields significant improvements in a variety of tasks, one common theme in these works is the assumption of hundreds or thousands of hours of available in-domain data, mostly from online resources, e.g., YouTube. This can be infeasible when we consider more niche adaptation settings, or possible privacy concerns, e.g.,
how would one collect 1,000 hours of psychotherapy sessions in Greek? In this work, we explore domain adaptation methods in a more resource-constrained environment.

Fig. 1. Target-domain adaptation through self-supervision. On the left, the general pre-training stage of XLSR-53 using the self-supervised loss $\mathcal{L}_s$; general pre-training is performed on 56,000 hours of audio in 53 languages. On the right, the proposed domain-adaptive finetuning stage, where the speech recognition task is learned using transcribed source domain data, while adaptation to the target domain is performed by including the self-supervised loss over (audio-only) source and target domain data.

III. DOMAIN ADAPTATION THROUGH MULTI-DOMAIN SELF-SUPERVISION

The proposed approach is based on end-to-end adaptation of a large pre-trained speech model during the finetuning phase, by including in-domain self-supervision. We extend UDALM [45], which has shown promise for NLP tasks, for the adaptation of wav2vec2-based acoustic models, and specifically XLSR. We focus on the problem of UDA in the context of a low-resource language, i.e., Greek. The key finding of our exploration is that a straightforward extension of UDALM, i.e., using only target domain self-supervision, underperforms in this setting, and the use of both source and target domain data is essential for successful adaptation. In this section, we first present a quick overview of the XLSR-53 training procedure, and then outline the proposed domain adaptation approach, which is shown in Fig. 1.

A. XLSR-53

XLSR-53 [21] is a massively pre-trained speech model, trained on 56,000 hours of multilingual speech covering 53 languages.
The model is based on wav2vec2 [20], which is composed of a multi-layer convolutional feature encoder that extracts audio features $z_t$ from the raw audio, and a transformer context encoder that maps the latent audio features to the output hidden states $c_t$. Each latent feature $z_t$ corresponds to 25 ms of audio with a stride of 20 ms. A contrastive objective $\mathcal{L}_c$ is used for pre-training. For this, product quantization [60] is applied to the features $z_t$, and then a discrete approximation of $z_t$ is obtained by sampling from a Gumbel-softmax distribution [61], yielding discrete code vectors $q_t$, organized into $G = 2$ codebooks with $V = 320$ vocabulary entries each. The contrastive loss aims to identify the correct code vector for a given time step among a set of distractors $Q_t$, obtained through negative sampling from other time steps. To avoid mode collapse, a diversity loss $\mathcal{L}_d$ is included, by maximizing the entropy of the averaged softmax distribution over the code vector entries $\bar{p}_g$. The total loss is:

$$\mathcal{L}_s = \underbrace{-\log \frac{e^{s(z_t, q_t)}}{\sum_{\tilde{q} \sim Q_t} e^{s(z_t, \tilde{q})}}}_{\text{Contrastive loss}} \; + \; \underbrace{\frac{1}{GV} \sum_{g=1}^{G} \sum_{v=1}^{V} \bar{p}_{g,v} \log \bar{p}_{g,v}}_{\text{Diversity loss}} \qquad (3)$$
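As a schematic illustration of Eq. (3), the snippet below computes a simplified version of the two terms on random tensors: the quantizer and the negative-sampling machinery of wav2vec2 are replaced by randomly generated positives, distractors and codebook logits, so this is only a shape-level sketch of the objective, not the actual implementation.

```python
# Schematic computation of the contrastive and diversity terms of Eq. (3).
# Random tensors stand in for the quantizer outputs and sampled distractors.
import torch
import torch.nn.functional as F

B, T, D = 4, 20, 256    # batch, masked time steps, feature dimension
K = 100                 # number of distractors per time step
G, V = 2, 320           # codebooks and vocabulary entries per codebook
kappa = 0.1             # similarity temperature

z = torch.randn(B, T, D)           # encoder features at masked steps
q_pos = torch.randn(B, T, D)       # true quantized targets q_t
q_neg = torch.randn(B, T, K, D)    # distractors Q_t sampled from other time steps

def sim(a, b):
    return F.cosine_similarity(a, b, dim=-1) / kappa

pos_sim = sim(z, q_pos).unsqueeze(-1)                       # (B, T, 1)
neg_sim = sim(z.unsqueeze(2).expand(-1, -1, K, -1), q_neg)  # (B, T, K)
logits = torch.cat([pos_sim, neg_sim], dim=-1)              # positive at index 0
contrastive = F.cross_entropy(logits.view(-1, 1 + K),
                              torch.zeros(B * T, dtype=torch.long))

# Diversity term: scaled (negative) entropy of the batch-averaged codebook
# distribution, pushing the model to use all G * V code vector entries.
codebook_logits = torch.randn(B * T, G, V)
p_bar = codebook_logits.softmax(dim=-1).mean(dim=0)              # (G, V)
diversity = (p_bar * torch.log(p_bar + 1e-7)).sum() / (G * V)

L_s = contrastive + diversity
print(L_s.item())
```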
B. Domain Adaptive finetuning for Contrastive Learning of Speech Representations

Fig. 1 shows the proposed finetuning process. The key intuition is that we want the model to synergistically learn the task at hand (in our case ASR), while being adapted to the target domain through in-domain self-supervision. On the left of Fig. 1 we see the general pre-training stage of XLSR-53, which is pre-trained on 56K hours of multilingual audio corpora using the contrastive pre-training objective. On the right we see the proposed finetuning stage, which is inspired by [45]. During finetuning we form a mixed objective function:

L = L_{CTC}(x_s, y_s) + \alpha L_s(x_s) + \beta L_s(x_t)    (4)

where (x_s, y_s) ∼ S(x, y), x_t ∼ T(x), L_CTC is the CTC objective function, optimized using transcribed source domain data, and L_s is the contrastive loss from Eq. (3). We scale the contribution of each term using the hyper-parameters α and β. Note that, contrary to [45], who use only in-domain self-supervision, we leverage both source and target domain samples for the mixed self-supervision. We find that this is essential in our case to avoid mode collapse, i.e., the model using only a few of the available discrete code vectors. Simultaneous self-supervision on both the source and target data alleviates mode collapse by anchoring the target code vector space to a structure similar to that of the source code vectors. Hence, we refer to this approach as Mixed Multi-Domain Self-Supervision (M2DS2).
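To make the mixed objective concrete, here is a minimal sketch of a single M2DS2 loss computation following Eq. (4). The helpers `ctc_loss` and `contrastive_loss`, as well as the batch fields, are hypothetical placeholders standing in for the CTC head and the self-supervised objective of Eq. (3), not the authors' exact implementation.

```python
def m2ds2_loss(model, src_batch, tgt_batch, alpha=0.01, beta=0.02):
    """Mixed objective of Eq. (4).
    src_batch: transcribed source-domain audio (waveforms + text).
    tgt_batch: unlabeled target-domain audio (waveforms only).
    """
    # Supervised ASR term, computed only on the source domain
    l_ctc = ctc_loss(model, src_batch["audio"], src_batch["text"])
    # Self-supervision on BOTH domains: the source term anchors the code
    # vector space and helps avoid mode collapse on the target domain
    l_src = contrastive_loss(model, src_batch["audio"])
    l_tgt = contrastive_loss(model, tgt_batch["audio"])
    return l_ctc + alpha * l_src + beta * l_tgt
```

The default values of α and β in the sketch follow the settings reported in Section V.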
IV. THE GREC-MD CORPUS

For our experiments we compose a speech corpus for the Greek language that is suitable for multi- and cross-domain evaluation. The GREC-MD corpus contains 206 hours of Greek speech. Audio is segmented into individual utterances, and each utterance is paired with its corresponding transcription. Table II summarizes the included sub-corpora, as well as the train, development and test splits. The dataset is constructed with three core principles in mind:

1) Data Volume: We collect the largest publicly available speech recognition corpus for the Greek language, able to scale to hundreds of hours of transcribed audio.

2) Temporal Relevance: Language changes over time. We aim at an up-to-date corpus that encompasses the latest terms and topics that appear in daily speech.

3) Multi-Domain Evaluation: Single-domain evaluation can lead to misleading estimates of the expected performance of ASR models. For example, state-of-the-art ASR models [27] achieve under 5% Word Error Rate (WER) on the Librispeech [62] test sets, but this is an over-estimation of system performance in the field. This is exacerbated when considering different acoustic conditions or terminology. We consider multi-domain evaluation essential when developing and deploying real-world ASR models.

To satisfy the first two points, we collect data from a public, continuously updated resource, i.e., the Hellenic Parliament Proceedings, where recordings of the parliamentary sessions are regularly uploaded. The benefit of using this resource is the straightforward collection of a continuously growing, multi-speaker corpus of transcribed audio that is always up-to-date, as the parliamentary discussions revolve around current affairs. We refer to this corpus as HParl.
For the multi-domain evaluation, we merge HParl with two publicly available corpora that have different acoustic and language characteristics. We refer to the merged, multi-domain corpus as GREC-MD. In this section, we describe the collection and curation process of HParl, and present the relevant statistics for the experiments.

TABLE III: PLENARY SESSIONS INCLUDED IN HPARL. THE HOURS COLUMN REFERS TO THE RAW (UNSEGMENTED) HOURS OF COLLECTED AUDIO.

Start date   End date     #Sessions   Hours
15-02-2022   01-03-2022   10          55
18-01-2019   01-02-2019   10          52
28-03-2019   10-05-2019   20          108
10-12-2018   21-12-2018   10          88

Fig. 2. Overview of the Hellenic Parliament Chamber. The chamber has an amphitheatrical shape and can accommodate approximately 400-450 people. The positions of the key speakers, i.e., the current speaker and the parliament president, are annotated in the image.

A. Collection and Curation of HParl

Modern technological advances allow for more direct government transparency, through the commodification of storage and internet speeds. In this spirit, the records of the plenary sessions of the Hellenic Parliament are made publicly available for direct access through a webpage (https://www.hellenicparliament.gr/en/). The available video recordings date back to 2015.
For each plenary session, a video recording is uploaded, along with a full transcription that is recorded verbatim, and in real time, by the parliament secretaries. For the creation of HParl, we build a web crawler that traverses and downloads the video recordings, along with the transcriptions, from the official website. The collection process is parallelized over multiple threads, and is parameterized by a range of dates and, optionally, a target corpus size in GB or in hours. For this version of HParl, we collect the plenary sessions in four date ranges, as described in Table III. The majority of the collected sessions are from 2019, but we also include sessions from 2018 and 2022 for coverage of different topics. The individual components of the HParl curation pipeline are: Audio Pre-processing, Text Pre-processing, Alignment, Post-processing, and Data Splitting.

1) Audio Pre-processing: Fig. 2 shows the layout of the Hellenic Parliament Chamber. Plenary sessions mainly take place in this room, or in the secondary House Chamber, which has a similar setup but is smaller in size. Because of the room and microphone characteristics, the captured audio in the video streams contains reverberation due to sound reflections. We employ a light preprocessing pipeline, passing the input video streams through FFmpeg and converting them to a monophonic, lossless audio format with a 16000 Hz sampling rate. The resulting audio is not passed through any de-reverberation or speech enhancement software. The resulting audio files have a minimum, average and maximum duration of 6 minutes, 6 hours and 16 hours, respectively.
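A minimal sketch of this conversion step is shown below, assuming FFmpeg is available on the system and using FLAC as the lossless container; the paths are illustrative.

```python
import subprocess
from pathlib import Path

def extract_audio(video_path: str, out_dir: str = "audio16k") -> Path:
    """Convert a downloaded session video to 16 kHz, mono, lossless audio."""
    out = Path(out_dir) / (Path(video_path).stem + ".flac")
    out.parent.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        [
            "ffmpeg", "-y", "-i", video_path,
            "-vn",           # drop the video stream
            "-ac", "1",      # downmix to a single (mono) channel
            "-ar", "16000",  # resample to 16000 Hz
            str(out),
        ],
        check=True,
    )
    return out
```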
2) Text Pre-processing: The text files contain full, word-by-word transcriptions of the speeches and of the questions asked by members of the audience, as well as extra annotations made by the parliament secretaries. Some annotations are relevant, i.e., the speaker name, while others are plain-text descriptions of events happening during the session and need to be filtered out (e.g., "The session is interrupted for a 15 minute break"). We use a rule-based system, based on regular expressions, that filters out the unnecessary information, keeping only the transcriptions and the speaker names. The speaker labels are created by transliterating the speaker names and roles from Greek to Greeklish using the "All Greek to Me!" tool [63]. Text is lower-cased and normalized to remove multiple whitespaces. The result is a text file containing the raw transcriptions, and a mapping from speaker labels to their respective text parts.
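A simplified sketch of such rule-based filtering is given below. The regular expressions are illustrative assumptions; the actual patterns used for the parliamentary annotations and speaker headers are more elaborate.

```python
import re

# Assumed formats: "SPEAKER NAME:" headers and parenthesized event descriptions
SPEAKER_RE = re.compile(r"^([^:()]{2,80}):")
EVENT_RE = re.compile(r"\([^)]*\)")

def clean_transcript(raw_text: str) -> dict:
    """Return a mapping from speaker label to a list of cleaned utterances."""
    speaker_to_text = {}
    current = None
    for line in raw_text.splitlines():
        line = EVENT_RE.sub(" ", line)          # drop event descriptions
        match = SPEAKER_RE.match(line)
        if match:                               # a new speaker takes the floor
            current = match.group(1).strip()
            line = line[match.end():]
        if current is None:
            continue
        line = re.sub(r"\s+", " ", line).strip().lower()  # normalize whitespace, lower-case
        if line:
            speaker_to_text.setdefault(current, []).append(line)
    return speaker_to_text
```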
3) Alignment and Segmentation: The primary challenge in exploiting the plenary sessions for ASR purposes is the length of the recordings, as their durations vary from 6 minutes to 16 hours. However, data samples used to train ASR systems are generally less than 30 seconds long. Computational challenges have limited the length of training utterances for HMM-GMM models [64], and continue to do so for contemporary neural network models. Therefore, we need to segment the sessions into smaller pieces, more suitable for ASR training. A second challenge is posed by mismatches between audio and transcripts: the parliamentary proceedings do not fully capture everything that is said during the sessions, and do not account for speech disfluencies.

In order to obtain smaller, clean segments that are suitable for ASR training, we follow the segmentation procedure proposed by [65]. Initially, the raw recordings are segmented into 30-second segments, and the transcriptions are split into smaller segments of approximately 1000 words, called documents. Each segment is decoded using a seed acoustic model trained on the Logotypografia corpus [66] and a 4-gram biased LM trained on the corresponding transcription of each recording. The best-path transcript of each segment is obtained and paired with the best matching document via TF-IDF similarity. Finally, each hypothesis is aligned with the transcription using Smith-Waterman alignment [67] to select the best matching sub-sequence of words. The above method yields a list of text utterances, with their corresponding start and end times in the source audio files. The procedure yields 120 hours of usable segmented utterances out of the original 303 hours of raw audio, a ratio of 39.6%.

4) Post-processing: After the segments are extracted, we filter out extremely short segments (less than 2 words). Moreover, the iterative alignment algorithm may replace some intermediate words with a tag. When this tag is inserted, we match the surrounding text with the raw transcriptions and re-insert the missing words.
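A sketch of the TF-IDF matching step is shown below, using scikit-learn; the biased-LM decoding of the seed model and the Smith-Waterman refinement are omitted, and the function name is illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def match_hypotheses_to_documents(hypotheses, documents):
    """Pair each decoded 30-second hypothesis with the most similar
    ~1000-word transcript document via TF-IDF cosine similarity.
    Returns the index of the best matching document per hypothesis."""
    vectorizer = TfidfVectorizer()
    doc_vecs = vectorizer.fit_transform(documents)   # (n_docs, vocab)
    hyp_vecs = vectorizer.transform(hypotheses)      # (n_hyps, vocab)
    sims = cosine_similarity(hyp_vecs, doc_vecs)     # (n_hyps, n_docs)
    return sims.argmax(axis=1)
```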
Furthermore, we match each segment to its corresponding speaker label. Segments without a speaker label are discarded. Lastly, speakers are associated with a gender based on name suffixes, using a simple, Greek-language-specific rule: speaker names that end in a(α), h(η), w(ω) or is(ις) are classified as female, while the rest are classified as male. We format the segments, speaker and gender mappings in the standard folder structure used by the Kaldi speech recognition toolkit [36].

5) Data Splitting: We provide an official train - development - test split. The development set contains 3 plenary sessions, one from 2018, one from 2019 and one from 2022, resulting in 9 hours of segmented speech. Similarly, the test set contains one session from each year, resulting in 11 hours of segmented speech. The remaining 99 hours of segmented speech are assigned to the training set.

B. Including corpora from different domains

We merge HParl with two publicly available corpora to create GREC-MD for multi-domain evaluation.

1) Common Voice: Common Voice (CV) [68] is a crowd-sourced, multi-lingual corpus of dictated speech, created by Mozilla. The data collection is performed through a web app or an iPhone app. Contributors are presented with a prompt and are asked to read it. The prompts are taken from public domain sources, i.e.,
books, wikipedia, user submitted prompts and other public corpora. The maximum prompt length is 15 words. A rating system is built into the platform, where contributors can upvote or downvote submitted pairs. A pair is considered valid if it receives two upvotes. Speaker independent train, development and test splits are provided. The dataset is open to the research community, released under a permissive Creative Commons license (CC0). In this work, we use version 9.0 of CV, accessed on April 27, 2022. We keep only the valid utterances, i.e., 16 hours of speech from 325 contributors (19-49 years old, 67% male / 23% female).

2) Logotypografia: Logotypografia [66] is one of the first corpora for Large Vocabulary Continuous Speech Recognition in Greek. The dataset contains 33,136 newscast utterances, or 72 hours of speech. The utterances were collected from 125 speakers (55 male, 70 female), who were staff of the popular "Eleftherotypia" newspaper in Greece, under varied acoustic conditions. Approximately one third of the utterances were collected in a sound proof room, one third in a quiet room and the last third in an office room. The average utterance duration is 7.8 seconds.
The transcriptions contain several speech and non-speech event annotations, lower-cased Greek words, and stress marks. Numbers are expanded to full words. We use the whole dataset, and perform light preprocessing of the transcriptions by discarding the annotated events and punctuation. We henceforth refer to each dataset by the abbreviations HParl: HP, CommonVoice: CV, Logotypografia: LG.

V. EXPERIMENTAL SETTINGS

For our experiments we use the following hyper-parameter settings, unless explicitly stated otherwise. For model training, we use the AdamW optimizer [69] with learning rate 0.0003. We apply warmup for the first 10% of the maximum training steps, and a linear learning rate decay after that. Models are finetuned for a maximum of 10000 steps. For speech recognition training, we make use of the Connectionist Temporal Classification (CTC) loss [70], optimized using the available transcribed data in each scenario. Validation runs every 500 steps on the development set, and early stopping is employed on the development CTC loss with patience 5. Batch size is set to 8 during finetuning for all scenarios, except for M2DS2. In the case of M2DS2 we create mixed batches of size 12, containing 4 transcribed source domain samples and 8 unlabeled target domain samples, and train for 10,000 CTC updates.
For memory reasons, we split the mixed batches into mini-batches of 4 and interleave them during model training. Gradients are accumulated over 3 interleaved mini-batches. For the self-supervised objective, we create masks of maximum timestep length 10, with masking probability 0.4. We weigh the contributions of the source and target domain contrastive objectives, and bring them to the same order of magnitude as the CTC loss, by setting α = 0.01 and β = 0.02. The convolutional feature encoder is kept frozen for all experiments. Our code is based on the huggingface implementation of XLSR (https://huggingface.co/docs/transformers/). For all experiments, we resample the audio files to 16 kHz and downmix to single-channel audio. We exclude utterances in the training set that are longer than 12 seconds. All experiments are run on a single NVIDIA RTX 3090 GPU, with mixed precision training.
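The optimization schedule described above can be sketched as follows. The data iterator and the `m2ds2_loss` helper (see the sketch in Section III) are assumptions; the actual training script may differ.

```python
import torch
from transformers import get_linear_schedule_with_warmup

def train_m2ds2(model, mixed_batches, max_steps=10_000, accum=3):
    """AdamW with 10% linear warmup and linear decay, interleaving source and
    target mini-batches of 4 and accumulating gradients over `accum`
    mini-batches per update (approximating the mixed batch of 4 + 8 samples).

    `mixed_batches` is assumed to yield lists of (source_mb, target_mb) pairs.
    """
    optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
    scheduler = get_linear_schedule_with_warmup(
        optimizer,
        num_warmup_steps=int(0.1 * max_steps),
        num_training_steps=max_steps,
    )
    for _, group in zip(range(max_steps), mixed_batches):
        optimizer.zero_grad()
        for src_mb, tgt_mb in group:                      # interleaved mini-batches
            loss = m2ds2_loss(model, src_mb, tgt_mb, alpha=0.01, beta=0.02)
            (loss / accum).backward()                     # gradient accumulation
        optimizer.step()
        scheduler.step()
```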
For the language model training, we create a large corpus for the Greek language using a subset of the Greek part of CC-Net [71] (approximately 11 billion tokens), and combine it with 1.5 billion tokens from the Greek version of Wikipedia and the Hellenic National Corpus (HNC) [72]. During preprocessing, we remove all punctuation and accents, deduplicate lines, and convert all letters to lowercase. We will refer to this corpus as the Generic Greek Corpus (GGC). We train a 4-gram language model on GGC using KenLM [73] and prune bigrams, trigrams and four-grams with counts less than 3, 5 and 7, respectively. We incorporate the n-gram LMs at inference time using the pyctcdecode framework (https://github.com/kensho-technologies/pyctcdecode). We use language model rescoring over a beam search decoder with 13 beams.

The evaluation metric is the Word Error Rate (WER) over the target test set. For assessing the adaptation effectiveness, we also report the relative WER improvement over the unadapted baseline in the appropriate scenarios, which is defined in Eq. (5). We refer to this metric as the Relative Adaptation Improvement (RAI) for the rest of this paper:

RAI = -\frac{WER_{adapted} - WER_{unadapted}}{WER_{unadapted}} \times 100\%    (5)

The minus sign is included so that RAI takes negative values when the adaptation fails, i.e., when WER_{unadapted} < WER_{adapted}.

TABLE IV: ASR PERFORMANCE OF XLSR-53 OVER THE THREE CORPORA FOR FULLY SUPERVISED IN-DOMAIN FINETUNING (WER)

LM       HP      CV      LG
No LM    26.21   29.33   31.94
4g GGC   15.64   9.52    26.45
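To make the decoding and evaluation setup concrete, a short sketch is given below. The vocabulary list and LM path are placeholders, and only the beam width is taken from the settings above; other decoder options are left at their defaults.

```python
import numpy as np
from pyctcdecode import build_ctcdecoder

def lm_rescored_transcript(logits: np.ndarray, vocab: list, lm_path: str) -> str:
    """Beam search CTC decoding with n-gram LM rescoring (13 beams).
    `logits` is the (time, vocab_size) output of the finetuned acoustic model,
    and `lm_path` points to the pruned 4-gram KenLM model trained on GGC."""
    decoder = build_ctcdecoder(vocab, kenlm_model_path=lm_path)
    return decoder.decode(logits, beam_width=13)

def rai(wer_adapted: float, wer_unadapted: float) -> float:
    """Relative Adaptation Improvement of Eq. (5), in percent."""
    return -(wer_adapted - wer_unadapted) / wer_unadapted * 100.0
```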
VI. SUPERVISED IN-DOMAIN TRAINING

In the first set of experiments, we explore the performance of supervised finetuning of XLSR-53 for each domain. This will give an upper bound estimation for UDA performance. We finetune XLSR-53 on CV, HP and LG (separately) and perform in-domain evaluation on the respective test sets.

TABLE V: M2DS2 PERFORMANCE USING GREEDY DECODING FOR UDA BETWEEN HP, CV, AND LG. A → B INDICATES THAT A IS THE SOURCE DOMAIN AND B IS THE TARGET DOMAIN. (G) INDICATES GREEDY DECODING. (LM) INDICATES BEAM SEARCH WITH LM RESCORING. WE REPORT THE WER ON THE TARGET TEST SET, AS WELL AS THE RAI (%) OVER THE SO (UNADAPTED) BASELINE. WER: LOWER IS BETTER. RAI: HIGHER IS BETTER.

Setting   | SO (G) | CPT (G)       | PSL (G)        | M2DS2 (G)     | SO (LM) | CPT (LM)      | PSL (LM)        | M2DS2 (LM)
          | WER    | WER     RAI   | WER     RAI    | WER     RAI   | WER     | WER     RAI   | WER      RAI    | WER     RAI
HP → CV   | 55.9   | 59.68   -6.8  | 55.3    1.2    | 52.95   5.3   | 25.26   | 26.44   -4.7  | 24.24    4.0    | 18.35   27.4
HP → LG   | 48.65  | 52.63   -8.2  | 57.68   -18.6  | 58.99   -21.3 | 30.34   | 32.27   -6.4  | 39.32    -29.6  | 32.58   -7.4
LG → CV   | 59.57  | 66.43   -13.4 | 81.90   -39.8  | 51.31   12.4  | 25.96   | 31.51   -21.4 | 52.05    -100.5 | 17.30   33.4
LG → HP   | 62.13  | 67.51   -8.7  | 71.46   -15.0  | 60.09   3.3   | 31.48   | 31.58   -0.3  | 45.36    -44.1  | 31.36   0.4
CV → LG   | 69.55  | 71.12   -2.3  | 71.34   -2.6   | 63.40   8.8   | 50.80   | 52.40   -3.2  | 48.68    4.2    | 36.93   27.3
CV → HP   | 70.72  | 73.83   -4.4  | 78.05   -10.4  | 68.70   2.9   | 52.09   | 52.18   -0.2  | 54.82    -5.2   | 41.88   19.6
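The in-domain finetuning setup can be sketched with the huggingface transformers API roughly as follows; the processor construction, vocabulary, and training loop are omitted, and the keyword arguments are only one way to configure the CTC head, not the authors' exact script.

```python
from transformers import Wav2Vec2ForCTC

def load_xlsr_for_ctc(vocab_size: int):
    """Load the pre-trained XLSR-53 encoder with a fresh CTC head.
    The convolutional feature encoder is kept frozen, as in Section V."""
    model = Wav2Vec2ForCTC.from_pretrained(
        "facebook/wav2vec2-large-xlsr-53",
        vocab_size=vocab_size,          # size of the character vocabulary
        ctc_loss_reduction="mean",
    )
    model.freeze_feature_encoder()
    return model
```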
Results are summarized in Table IV. The first row indicates the performance of greedy decoding, while in the second row we report the performance of the beam search decoder, rescored using the scores of the 4-gram GGC language model. We observe that the greedy decoding performance is under 30 WER for both HP and CV, while for LG we achieve ∼32 WER. This makes sense, as LG is the most diverse dataset with respect to the included acoustic conditions. Furthermore, we observe that the incorporation of a language model results in an impressive WER reduction on CV, followed by HP and then LG. While CV includes relatively simple phrases with common vocabulary, HP and LG contain more specialized terminology.

VII. UNSUPERVISED DOMAIN ADAPTATION USING IN-DOMAIN AUDIO

Here, we evaluate the effectiveness of M2DS2 for UDA. We compare with three baselines:

1) Source Only Training (SO): We perform supervised finetuning of XLSR-53 (CTC) using only the source-domain data, and run decoding on the target domain test set. No in-domain data are used for adaptation.

2) Continual Pre-Training (CPT): We perform a pre-training phase using the loss in Eq. (3) on the target domain train set, to create adapted versions of XLSR. Pre-training is run for 20000 steps with batch size 4. Only the audio is used, without transcriptions. The adapted checkpoints are then finetuned with the CTC loss on the source domain transcribed data.
Evaluation is performed on the target test set.

3) Pseudolabeling (PSL): We finetune XLSR-53 using the source domain data with CTC loss. Then we run inference on the source model, to extract silver transcriptions for the target domain training set. We use the silver transcriptions for supervised finetuning on the target domain.

In Table V we compare M2DS2 with the SO, CPT and PSL baselines for six adaptation scenarios, i.e., cross dataset evaluation between the three datasets in GREC-MD. The left half corresponds to greedy decoding, while for the right half we use the 4-gram LM trained on GGC.

Fig. 3. Performance of M2DS2 (blue line) for the LG → CV setting, when reducing the amount of available target samples to 50%, 25%, and 10% of the original dataset (horizontal axis). SO performance is indicated with the orange line. Vertical axis: WER. Horizontal axis: target audio percentage (100% → 0%).

First, we observe the SO model performance. The SO models are the finetuned models from Table IV, evaluated in out-of-domain settings. We see that out-of-domain evaluation results in a large performance hit, e.g., while in the CV9 → CV9 in-domain setting we achieve 29.33 WER,
In Table V we compare M2DS2 with the SO, CPT and PSL baselines for six adaptation scenarios, i.e., cross-dataset evaluation between the three datasets in GREC-MD. The left half corresponds to greedy decoding, while for the right half we use the 4-gram LM trained on GGC.

First, we observe the SO model performance. The SO models are the finetuned models from Table IV, evaluated in out-of-domain settings. We see that out-of-domain evaluation results in a large performance hit, e.g., while in the CV9 → CV9 in-domain setting we achieve 29.33 WER, in the CV9 → HP out-of-domain setting we get 69.55 WER. This confirms that for real-world ASR tasks, multi-domain evaluation is essential.

Second, we observe that in most adaptation scenarios both CPT and PSL fail to surpass the SO (unadapted) baseline. In the case of CPT, we hypothesize that this is due to the relatively data-constrained nature of our setting. In the best-case scenario, we have 99 hours of available target domain audio, which is not enough to perform a discrete CPT stage. Note that most works in the literature use ∼1000 hours of target audio for CPT. In the case of PSL, the poor performance is due to the quality of the silver labels created by the seed model. While the performance would improve with more elaborate approaches (e.g., confidence filtering), in challenging adaptation scenarios PSL approaches are limited by the SO model's performance.

Lastly, we observe that M2DS2 is the only approach among our baselines that manages to achieve a positive RAI in most adaptation scenarios, consistently outperforming the SO baseline by significant margins. This improvement is even more pronounced when we include an LM during inference. One exception to this pattern is the HP → LG scenario, where the SO baseline achieves the best performance. We attribute this to the fact that we performed minimal hyper-parameter tuning during model development.
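All WER figures in this section are corpus-level scores. As a reference point, the snippet below shows how such scores, and the relative WER reduction of an adapted model over the SO baseline, can be computed with the jiwer library; it is an illustrative calculation with placeholder transcripts, not the scoring scripts used for Table V.

```python
# Illustrative corpus-level scoring (placeholder transcripts, not real outputs).
import jiwer

refs    = ["reference transcription one", "reference transcription two"]
hyp_so  = ["reference transcriptshun one", "a reference two"]   # SO baseline hypotheses
hyp_ada = ["reference transcription one", "reference two"]      # adapted model hypotheses

wer_so = jiwer.wer(refs, hyp_so)
wer_ada = jiwer.wer(refs, hyp_ada)

# Relative WER reduction of the adapted system over the source-only baseline.
rel_reduction = 100.0 * (wer_so - wer_ada) / wer_so
print(f"SO WER={wer_so:.3f}  adapted WER={wer_ada:.3f}  relative reduction={rel_reduction:.1f}%")
```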
Fig. 3. Performance of M2DS2 (blue line) for the LG → CV setting, when reducing the amount of available target samples to 50%, 25%, and 10% of the original dataset (horizontal axis: target audio percentage, 100% → 0%). SO performance is indicated with the orange line. Vertical axis: WER.

A. The sample efficiency of M2DS2

One key observation, both in the literature and in our experiments, is that CPT requires a large amount of un-transcribed target domain audio. This raises the question: can we leverage self-supervision for domain adaptation in data-constrained settings? In Fig. 3 we evaluate the performance of M2DS2 when we reduce the amount of target domain audio. Specifically, we focus on the LG → CV scenario. The full training corpus of CV contains 12 hours of audio. We train M2DS2 with 50%, 25% and 10% of the available samples, or 6, 3 and 1.2 hours of audio respectively, and plot the resulting WER on the target (CV) test set. In all cases, the full source (LG) training corpus is used. We observe that M2DS2 achieves lower WER than the SO baseline, even with only 3 hours of target domain audio. While CPT, like most multi-stage training approaches, can suffer from catastrophic forgetting, M2DS2 avoids this issue, being a single-stage approach with a mixed task-specific and self-supervised objective. This provides a promising avenue for adaptation when the collection of in-domain recordings is expensive or infeasible.
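The reduced target-domain pools used above are plain random subsets of the CV training manifest. The sketch below shows one way such subsets could be drawn reproducibly; the manifest paths and the fixed seed are illustrative assumptions.

```python
# Hypothetical sketch: draw 50% / 25% / 10% subsets of the untranscribed
# target-domain audio used for the self-supervised term of M2DS2.
import random

def subsample_manifest(wav_paths, fraction, seed=42):
    """Return a reproducible random subset containing `fraction` of the files."""
    rng = random.Random(seed)
    k = max(1, int(len(wav_paths) * fraction))
    return rng.sample(wav_paths, k)

target_manifest = [f"cv_train/utt{i:05d}.wav" for i in range(12000)]   # placeholder pool
subsets = {frac: subsample_manifest(target_manifest, frac) for frac in (0.5, 0.25, 0.10)}
for frac, files in subsets.items():
    print(f"{int(frac * 100):>3}% subset -> {len(files)} utterances")
```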
Fig. 4. t-SNE scatter plots of code vectors extracted from M2DS2 without source domain self-supervision (top: only target domain self-supervision) and with source domain self-supervision (bottom: target and source domain self-supervision), for LG (red) and CV (teal).

TABLE VI
LANGUAGE ADAPTATION OF THE M2DS2 LG → CV MODEL, USING BIASED AND AUGMENTED LMS. WE USE THE VARIANT OF THE MODEL TRAINED WITH 3 HOURS OF IN-DOMAIN AUDIO. WE VARY THE AMOUNT OF IN-DOMAIN TEXT DATA FROM 752K TOKENS TO 38K TOKENS.

In-domain text (%)               Biased LM   Augmented LM
100%                             11.22       12.84
50%                              15.13       15.05
25%                              20.84       16.64
10%                              27.75       18.47
5%                               33.04       19.31
Baseline (M2DS2 + Generic LM)    20.7

Fig. 5. Language-only adaptation for LG → HP using the SO model finetuned on LG. In-domain text data range from 11M tokens (left) to 110K tokens (right). Blue/dashed: Baseline with generic LM. Purple/circles: Biased LM. Orange/diamonds: Augmented LM.

B. The importance of Multi-Domain Self-Supervision

In Section III-B we argue that it is essential to include both source and target domain data for the self-supervised objective of M2DS2. To illustrate the effect of this approach, we train two versions of M2DS2 for the LG → CV scenario.
For the first version we set α = 0.01, while for the second we set α = 0, removing the second term of Eq. (4). We extract the code vectors for the first 100 samples of both LG and CV and flatten them across the time steps, resulting in 60000 × 768 code vectors corresponding to individual timesteps. We plot these code vectors using t-SNE [74] in Fig. 4 for both models. We see that when we do not include the source domain self-supervision, the code vector space collapses into a few tight clusters, and most audio segments correspond to just a few code vectors. This is a visual clue that indicates the mode collapse problem. When we include the source domain term, we see that the code vector space has more structure, and coverage of the space is more complete, both for CV (target domain) and LG (source domain). Experimentally, we train M2DS2 with α = 0 for all source/target domain pairs and we find that the mode collapse is destructive for target domain performance. During our experiments we got WER in the range 80-99, indicating failure to converge to acceptable solutions across all scenarios. The simple inclusion of both source and target domain self-supervision stabilizes training, avoids mode collapse, and leads to successful unsupervised adaptation between domains.
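The visualization step itself is straightforward. Below is a minimal sketch, assuming a HuggingFace Wav2Vec2ForPreTraining-style checkpoint whose forward pass exposes the projected quantized codevectors, scikit-learn's t-SNE, and matplotlib; the checkpoint path and the random waveforms that stand in for LG and CV utterances are placeholders, not our actual data loading.

```python
# Illustrative sketch: project quantized codevectors from the source (LG) and
# target (CV) domains with t-SNE.  Random noise stands in for real utterances.
import numpy as np
import torch
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2ForPreTraining

CKPT = "path/to/m2ds2-checkpoint"                        # placeholder checkpoint
extractor = Wav2Vec2FeatureExtractor.from_pretrained(CKPT)
model = Wav2Vec2ForPreTraining.from_pretrained(CKPT).eval()

def codevectors(waveforms):
    """Quantized codevectors flattened across time: (total_frames, dim)."""
    inputs = extractor(waveforms, sampling_rate=16000, return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    return out.projected_quantized_states.flatten(0, 1).numpy()

rng = np.random.default_rng(0)
lg_waveforms = [rng.standard_normal(16000) for _ in range(4)]   # stand-ins for LG clips
cv_waveforms = [rng.standard_normal(16000) for _ in range(4)]   # stand-ins for CV clips
src, tgt = codevectors(lg_waveforms), codevectors(cv_waveforms)

emb = TSNE(n_components=2, init="pca", perplexity=30).fit_transform(np.concatenate([src, tgt]))
plt.scatter(*emb[: len(src)].T, s=2, c="red", label="LG (source)")
plt.scatter(*emb[len(src):].T, s=2, c="teal", label="CV (target)")
plt.legend()
plt.show()
```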
VIII. UNSUPERVISED AND WEAKLY SUPERVISED LANGUAGE ADAPTATION

When small amounts of in-domain textual data are available, simple N-gram LM adaptation techniques can be very effective. In this brief set of experiments, we first explore the unsupervised language adaptation setting, where no in-domain audio is used, and then we relax the problem to the weakly supervised setting, where M2DS2 is combined with the adapted N-gram LMs. These settings are described in Sections II-A2 and II-A3 respectively.
TABLE VII
CLOSING THE GAP BETWEEN SO TRAINING AND FULLY SUPERVISED TRAINING FOR THE LG → CV ADAPTATION SCENARIO USING M2DS2, WITH VARYING AMOUNTS OF AVAILABLE UNPAIRED IN-DOMAIN AUDIO AND TEXT. (U): UNSUPERVISED ACOUSTIC OR LANGUAGE ADAPTATION. (W): WEAKLY SUPERVISED ADAPTATION.

Method        #Audio (h)   #Tokens    LM          WER
SO (U)        -            -          N/A         59.57
M2DS2 (U)     3            -          N/A         57.31
M2DS2 (U)     12           -          N/A         51.31
SO (U)        -            -          Generic     25.96
SO (U)        -            38,632     Augmented   24.67
SO (U)        -            751,953    Augmented   20.46
M2DS2 (U)     3            -          Generic     20.7
M2DS2 (U)     12           -          Generic     17.3
M2DS2 (W)     3            38,632     Augmented   19.31
M2DS2 (W)     12           38,632     Augmented   16.29
M2DS2 (W)     3            751,953    Augmented   12.84
M2DS2 (W)     12           751,953    Augmented   10.61
Supervised    12           751,953    Generic     9.52
Supervised    12           751,953    Augmented   7.94

We explore two approaches for LM adaptation: biased LMs, and in-domain data augmentation. To create biased LMs, we train a 4-gram LM on the available in-domain data, which then replaces the generic LM trained on GGC. For LM data augmentation we follow a perplexity filtering approach similar to [71]. We first train a biased LM using the available target domain text, and then use it to calculate the perplexity of each line in the GGC corpus. We keep the 10% of the lines with the lowest perplexity. Then we train a 4-gram LM on the augmented "in-domain" corpus and use it for inference.
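Concretely, the filtering step can be implemented with the KenLM Python bindings as sketched below, assuming the biased 4-gram model has already been built with KenLM's lmplz on the available in-domain text and that GGC is stored as one sentence per line; all file paths are placeholders.

```python
# Illustrative sketch: perplexity filtering of a generic corpus with a biased LM.
# Assumes a biased n-gram model was trained beforehand, e.g. with KenLM:
#   lmplz -o 4 < in_domain.txt > biased.arpa
import kenlm

biased_lm = kenlm.Model("biased.arpa")                  # placeholder path

with open("ggc.txt", encoding="utf-8") as f:            # generic corpus, one sentence per line
    lines = [line.strip() for line in f if line.strip()]

# Score every line with the biased LM and keep the 10% with the lowest perplexity.
scored = sorted(lines, key=biased_lm.perplexity)
keep = scored[: max(1, len(scored) // 10)]

with open("augmented_in_domain.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(keep))
# A 4-gram LM trained on augmented_in_domain.txt (again with lmplz) is then
# used for decoding in place of the generic GGC model.
```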
Fig. 5 shows the performance of the SO LG → HP model with biased and augmented LMs, as we reduce the amount of available in-domain text data from 100% to 1% of the in-domain transcriptions (11M tokens to 110K tokens respectively). As a baseline we include the LG → HP SO model in combination with the generic LM trained on GGC. We observe that the use of biased LMs can lead to successful adaptation when an adequate amount of in-domain text data is available. On the other hand, the LM augmentation approach leads to successful adaptation even with very small amounts of in-domain text.

In Table VI we see the results of LM adaptation, combined with the M2DS2 LG → CV model. To demonstrate the sample efficiency of the approach, we use the variant that was trained using only 25% of the target domain audio (3 hours). We compare with M2DS2 combined with the 4-gram GGC LM for inference. We draw similar conclusions, i.e., the use of biased LMs performs well for sufficient text data, while with augmented LMs we can leverage very small amounts of in-domain text.
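For completeness, one common way to combine a finetuned CTC acoustic model with such a 4-gram LM at inference time is beam search with shallow fusion, e.g., via the pyctcdecode library. The sketch below is a generic illustration with placeholder paths and LM weights, and may differ from the exact decoder configuration used in our experiments.

```python
# Illustrative sketch: CTC beam-search decoding with an external 4-gram LM.
import numpy as np
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from pyctcdecode import build_ctcdecoder

CKPT = "path/to/adapted-checkpoint"                     # placeholder acoustic model
processor = Wav2Vec2Processor.from_pretrained(CKPT)
model = Wav2Vec2ForCTC.from_pretrained(CKPT).eval()

# Labels in vocabulary-id order; wav2vec 2.0 uses "|" as the word delimiter.
vocab = sorted(processor.tokenizer.get_vocab().items(), key=lambda kv: kv[1])
labels = [" " if tok == "|" else tok for tok, _ in vocab]
decoder = build_ctcdecoder(labels,
                           kenlm_model_path="adapted_4gram.arpa",  # placeholder LM
                           alpha=0.5, beta=1.0)                    # illustrative weights

def transcribe(waveform, sampling_rate=16000):
    inputs = processor(waveform, sampling_rate=sampling_rate, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0].numpy()      # (frames, vocab_size)
    return decoder.decode(logits)

print(transcribe(np.zeros(16000, dtype=np.float32)))    # dummy silent input
```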
IX. DISCUSSION & CONCLUSIONS

In this work, we have explored Unsupervised and Weakly Supervised Domain Adaptation of ASR systems in the context of an under-resourced language, i.e., Greek. We focus on domain adaptation through in-domain self-supervision for XLSR-53, a state-of-the-art multilingual ASR model. Specifically, we adopt a mixed task and self-supervised objective, inspired from NLP, and show that using only in-domain self-supervision can lead to mode collapse of the representations created by the contrastive loss of XLSR-53. Therefore, we propose the use of mixed task and multi-domain self-supervision, M2DS2, where the contrastive loss leverages both the source and target domain audio data. For evaluation we create and release HParl, the largest to-date public corpus of transcribed Greek speech (120 hours), collected from the Greek Parliamentary Proceedings. HParl is combined with two other popular Greek speech corpora, i.e., Logotypografia and CommonVoice, for multi-domain evaluation.

In our experiments, we find that while most UDA baselines fail in our low-resource setting, the proposed mixed task and multi-domain self-supervised finetuning strategy yields significant improvements for the majority of adaptation scenarios. Furthermore, we focus our ablations on showcasing the sample efficiency of the proposed finetuning strategy, and on demonstrating the necessity of including both source and target domain data for self-supervision.
Finally, we show that M2DS2 can be combined with simple language model adaptation techniques in a relaxed weakly supervised setting, where we achieve significant performance improvements with a few hours of in-domain audio and a small, unpaired in-domain text corpus. More concretely, in Table VII we present a summary of the discussed unsupervised and weakly supervised adaptation combinations, for different amounts of available in-domain audio and text. Note that for the weakly supervised scenarios, the in-domain audio and text are unpaired. We see that when no in-domain data are available, including an n-gram LM trained on large corpora is recommended. Furthermore, when in-domain audio is available, following a mixed multi-domain finetuning strategy using M2DS2 can yield significant WER reductions, even for a few hours of audio. When small amounts of in-domain text are available, using a corpus augmentation strategy, e.g., perplexity filtering, can produce adapted LMs and yield small improvements to the final WER. In the case of sufficient amounts of unpaired in-domain text and audio, independent adaptation of XLSR-53 using the audio data and of the n-gram LM using the text data can yield performance comparable to a fully supervised finetuning pipeline.

X. FUTURE WORK

In the future we plan to explore the effectiveness of the proposed adaptation strategy for other languages, and different adaptation settings, e.g., accent or cross-lingual adaptation. Of special interest is the investigation of the effectiveness of our approach for endangered languages, e.g., Pomak.
Furthermore, we plan to explore the combination of in-domain self-supervision with other popular UDA techniques, e.g., teacher-student models, adversarial learning, and data augmentation approaches. On the language adaptation side, we plan to explore multi-resolution learning, which has shown promise for ASR [75], and to investigate more elaborate end-to-end weakly supervised adaptation methods. Finally, we plan to expand our study to a multimodal setting, where both audio and video are available, e.g., lip reading.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' Le, and Yonghui Wu, “Pushing the limits of semi-supervised learning for automatic speech recognition,” 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' [54] Daniel S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' Park, William Chan, Yu Zhang, Chung-Cheng Chiu, Barret Zoph, Ekin D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' Cubuk, and Quoc V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' Le, “SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition,” in Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' Interspeech, 2019, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' 2613–2617.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' [55] Yaroslav Ganin and Victor Lempitsky, “Unsupervised domain adaptation by backpropagation,” in Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' ICML.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' 2015, ICML’15, p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' 1180–1189, JMLR.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content='org.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' [56] Douglas B Paul and Janet Baker, “The design for the wall street journal- based csr corpus,” in Speech and Natural Language: Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' of a Workshop Held at Harriman, New York, February 23-26, 1992, 1992.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' [57] Siu-Kei Au Yeung and Man-Hung Siu, “Improved performance of aurora 4 using htk and unsupervised mllr adaptation,” in Conf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' Spoken Language Processing, 2004.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' [58] Suchin Gururangan, Ana Marasovi´c, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' Smith, “Don’t stop pretraining: Adapt language models to domains and tasks,” in Proc.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' of the 58th Annu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' Meeting of the Association for Computational Linguistics, Online, July 2020, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' 8342–8360, Association for Computational Linguistics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' [59] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova, “BERT: pre-training of deep bidirectional transformers for language understanding,” CoRR, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' abs/1810.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content='04805, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' [60] Herve Jegou, Matthijs Douze, and Cordelia Schmid, “Product quantiza- tion for nearest neighbor search,” IEEE transactions on pattern analysis and machine intelligence, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' 33, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' 1, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' 117–128, 2010.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' [61] Eric Jang, Shixiang Gu, and Ben Poole, “Categorical reparametrization with gumbel-softmax,” in Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' ICLR, Apr.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' [62] Vassil Panayotov, “Librispeech: an asr corpus based on public domain audio books,” in Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' ICASSP.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' IEEE, 2015, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' 5206–5210.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' [63] Aimilios Chalamandaris et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=', “All greek to me!' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' an automatic greeklish to greek transliteration system,” in Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' LREC, 2006.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' [64] Carsten Meyer and Hauke Schramm, “Boosting hmm acoustic models in large vocabulary speech recognition,” Speech Communication, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' 48, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' 5, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' 532–548, 2006.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' [65] Vimal Manohar, Daniel Povey, and Sanjeev Khudanpur, “Jhu kaldi system for arabic mgb-3 asr challenge using diarization, audio-transcript alignment and transfer learning,” in Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' ASRU Workshop, 2017, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' 346–352.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' [66] Vassilios Digalakis et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=', “Large vocabulary continuous speech recog- nition in greek: corpus and an automatic dictation system,” in Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' Eurospeech, 2003, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' 1565–1568.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' [67] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content='F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' Smith and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content='S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' Waterman, “Identification of common molecular subsequences,” Journal of Molecular Biology, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' 147, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' 1, pp.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' 195– 197, 1981.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' [68] Rosana Ardila et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=', “Common voice: A massively-multilingual speech corpus,” in Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' LREC, 2020, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' 4218–4222.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' [69] Ilya Loshchilov and Frank Hutter, “Decoupled weight decay regulariza- tion,” in Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' ICLR, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' [70] Alex Graves, Santiago Fern´andez, Faustino Gomez, and J¨urgen Schmid- huber, “Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks,” in Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' ICML, 2006, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' 369–376.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' [71] Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzm´an, Armand Joulin, and ´Edouard Grave, “Ccnet: Extracting high quality monolingual datasets from web crawl data,” in Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' of the 12th Language Resources and Evaluation Conf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=', 2020, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' 4003–4012.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' [72] Nick Hatzigeorgiu, Maria Gavrilidou, Stelios Piperidis, George Carayan- nis, Anastasia Papakostopoulou, Athanassia Spiliotopoulou, Anna Vacalopoulou, Penny Labropoulou, Elena Mantzari, Harris Papageor- giou, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=', “Design and implementation of the online ilsp greek corpus.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=',” in LREC, 2000.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' [73] Kenneth Heafield, “Kenlm: Faster and smaller language model queries,” in Proc.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' of the 6th workshop on statistical machine translation, 2011, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' 187–197.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' [74] Laurens Van der Maaten and Geoffrey Hinton, “Visualizing data using t-sne.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=',” Journal of machine learning research, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' 9, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' 11, 2008.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' [75] Georgios Paraskevopoulos, Srinivas Parthasarathy, Aparna Khare, and Shiva Sundaram, “Multimodal and multiresolution speech recognition with transformers,” in Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' of the 58th Annu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' Meeting of the Association for Computational Linguistics, 2020, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'} +page_content=' 2381–2387.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAyT4oBgHgl3EQfdfed/content/2301.00304v1.pdf'}