Multilingual Sentence Transformer as A Multilingual Word Aligner

Weikang Wang1∗ Guanhua Chen2∗ Hanqing Wang1 Yue Han1 Yun Chen1†
1Shanghai University of Finance and Economics
2Southern University of Science and Technology
wwk@163.sufe.edu.cn ghchen08@gmail.com
{whq,hanyue}@163.sufe.edu.cn yunchen@sufe.edu.cn

Abstract

Multilingual pretrained language models (mPLMs) have shown their effectiveness in multilingual word alignment induction. However, these methods usually start from mBERT or XLM-R. In this paper, we investigate whether the multilingual sentence Transformer LaBSE is a strong multilingual word aligner. This idea is non-trivial, as LaBSE is trained to learn language-agnostic sentence-level embeddings, while the alignment extraction task requires the more fine-grained word-level embeddings to be language-agnostic. We demonstrate that the vanilla LaBSE outperforms other mPLMs currently used in the alignment task, and then propose to finetune LaBSE on parallel corpus for further improvement. Experiment results on seven language pairs show that our best aligner outperforms previous state-of-the-art models of all varieties. In addition, our aligner supports different language pairs in a single model, and even achieves new state-of-the-art on zero-shot language pairs that do not appear in the finetuning process.
1 Introduction

Word alignment aims to find the correspondence between words in parallel texts (Brown et al., 1993). It is useful in a variety of natural language processing (NLP) applications such as noisy parallel corpus filtering (Kurfalı and Östling, 2019), bilingual lexicon induction (Shi et al., 2021), code-switching corpus building (Lee et al., 2019; Lin et al., 2020) and incorporating lexical constraints into neural machine translation (NMT) models (Hasler et al., 2018; Chen et al., 2021b).

∗The first two authors contributed equally.
†Corresponding author.

Figure 1: Cosine similarities between subword representations in a parallel sentence pair from the 8th layer of mBERT (left) and the 6th layer of LaBSE (right). Red boxes denote the gold alignments.

Recently, neural word alignment approaches have developed rapidly and outperformed statistical word aligners like GIZA++ (Och and Ney, 2003) and fast-align (Dyer et al., 2013). Some works (Garg et al., 2019; Li et al., 2019; Zenkel et al., 2019, 2020; Chen et al., 2020b; Zhang and van Genabith, 2021; Chen et al., 2021a) induce alignments from an NMT model or its variants. However, these bilingual models only support the language pair involved in the training process. They also treat the source and target side differently, so two models are required for bidirectional alignment extraction. Another line of work (Jalili Sabet et al., 2020; Dou and Neubig, 2021) builds multilingual word aligners with contextualized embeddings from multilingual pretrained language models (Wu and Dredze, 2019; Conneau et al., 2020, mPLM). Thanks to the language-agnostic representations learned with the multilingual masked language modeling task, these methods are capable of inducing word alignments even for language pairs without any parallel corpus.

Different from previous methods, in this paper we present AccAlign, a more accurate multilingual word aligner built on the multilingual sentence Transformer LaBSE (Feng et al., 2022; see Figure 1). LaBSE is trained on a large-scale parallel corpus of various language pairs to learn language-agnostic sentence embeddings with contrastive learning.
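Alignment induction from contextualized embeddings, as described above, can be sketched with a generic mutual-argmax heuristic over a cosine similarity matrix. This is an illustrative sketch, not the exact procedure of any cited paper; the function names are our own and the vectors below are random stand-ins for real subword representations.

```python
import numpy as np

def cosine_sim_matrix(src, tgt):
    """Pairwise cosine similarities between rows of src and rows of tgt."""
    src = src / np.linalg.norm(src, axis=1, keepdims=True)
    tgt = tgt / np.linalg.norm(tgt, axis=1, keepdims=True)
    return src @ tgt.T

def mutual_argmax_alignments(sim):
    """Keep (i, j) pairs where token i and token j are each other's best match."""
    best_tgt = sim.argmax(axis=1)  # best target index for each source token
    best_src = sim.argmax(axis=0)  # best source index for each target token
    return sorted((i, int(j)) for i, j in enumerate(best_tgt) if best_src[j] == i)

# Random stand-ins for subword embeddings from some intermediate layer.
rng = np.random.default_rng(0)
src_vecs = rng.normal(size=(4, 8))  # 4 source subwords, hidden size 8
tgt_vecs = rng.normal(size=(5, 8))  # 5 target subwords
print(mutual_argmax_alignments(cosine_sim_matrix(src_vecs, tgt_vecs)))
```

Replacing the random vectors with subword hidden states from an intermediate layer of an mPLM yields the kind of similarity grid visualized in Figure 1.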
arXiv:2301.12140v1 [cs.CL] 28 Jan 2023

However, it is unclear whether LaBSE has learned language-agnostic word-level embeddings, which is the key to the success of word alignment extraction. Specifically, we first directly induce word alignments from LaBSE and demonstrate that LaBSE outperforms other mPLMs currently used in the alignment task. This indicates that LaBSE has implicitly learned language-agnostic word-level embeddings at some intermediate layer.
Then we propose a simple and effective finetuning method to further improve performance. Empirical results on seven language pairs show that our best aligner outperforms previous SOTA models of all varieties. In addition, our aligner supports different language pairs in a single model, and even achieves new SOTA on zero-shot language pairs that do not appear in the finetuning process.1

1 Code is available at https://github.com/sufenlp/AccAlign.

2 AccAlign

2.1 Background: LaBSE

LaBSE (Feng et al., 2022) is the state-of-the-art model for the cross-lingual sentence retrieval task. Given an input sentence, the model can retrieve the most similar sentence from candidates in a different language. LaBSE is first pretrained on a combination of masked language modeling (Devlin et al., 2019) and translation language modeling (Conneau and Lample, 2019) tasks. After that, it is effectively finetuned with a contrastive loss on 6B parallel sentences across 109 languages. We leave the training details of LaBSE to the appendix. However, as LaBSE does not include any word-level training loss when finetuning with the contrastive loss, it is unclear whether the model has learned high-quality language-agnostic word-level embeddings, which is the key for a multilingual word aligner.

2.2 Alignment Induction from LaBSE

To investigate whether LaBSE is a strong multilingual word aligner, we first induce word alignments from vanilla LaBSE without any modification or finetuning. This is done by utilizing the contextual embeddings from LaBSE. Specifically, consider a bilingual sentence pair x = ⟨x1, x2, ..., xn⟩ and y = ⟨y1, y2, ..., ym⟩. We denote the contextual embeddings from LaBSE as hx = ⟨hx1, ..., hxn⟩ and hy = ⟨hy1, ..., hym⟩, respectively. Following previous work (Dou and Neubig, 2021; Jalili Sabet et al., 2020), we get the similarity matrix from the contextual embeddings:

S = hx hy^T.  (1)

The similarity matrix is normalized for each row to get Sxy. Sxy is treated as the probability matrix, as its i-th row represents the probabilities of aligning xi to all tokens in y. The reverse probability matrix Syx is computed similarly by normalizing each column of S. Taking the intersection of the two probability matrices yields the final alignment matrix:

A = (Sxy > c) ∗ (Syx^T > c),  (2)

where c is a threshold and Aij = 1 indicates that xi and yj are aligned. The above method induces alignments on the subword level, which are converted into word-level alignments by aligning two words if any of their subwords are aligned, following (Zenkel et al., 2020; Jalili Sabet et al., 2020).

Figure 2: The framework of adapter-based finetuning. The blue blocks are kept frozen, while the red adapter blocks are updated during finetuning.

2.3 Finetuning LaBSE for Better Alignments

Inspired by (Dou and Neubig, 2021), we propose a finetuning method to further improve performance given a parallel corpus with alignment labels.

Adapter-based Finetuning Adapter-based finetuning (Houlsby et al., 2019; Bapna and Firat, 2019; He et al., 2021) is not only parameter-efficient, but also benefits model performance, especially for low-resource and cross-lingual tasks (He et al., 2021).
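As an illustration of the induction procedure of Section 2.2, the sketch below applies Equations 1 and 2 to precomputed contextual embeddings, followed by the subword-to-word conversion. This is not the authors' released code: the softmax normalization and the threshold value `c=0.5` are assumptions for the example.

```python
import numpy as np

def softmax(x, axis):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def induce_alignments(h_x, h_y, c=0.5):
    """Subword-level alignment induction (Equations 1 and 2).

    h_x: (n, d) contextual embeddings of the source subwords
    h_y: (m, d) contextual embeddings of the target subwords
    c:   alignment threshold (illustrative value)
    """
    S = h_x @ h_y.T                 # Eq. 1: similarity matrix, shape (n, m)
    S_xy = softmax(S, axis=1)       # row-normalized: x_i over tokens of y
    S_yx = softmax(S, axis=0)       # column-normalized: y_j over tokens of x
    return (S_xy > c) & (S_yx > c)  # Eq. 2: keep links confident in both directions

def to_word_level(A, x_word_ids, y_word_ids):
    """Align two words if any pair of their subwords is aligned."""
    return {(x_word_ids[i], y_word_ids[j]) for i, j in zip(*np.nonzero(A))}
```

Here `x_word_ids[i]` maps subword i to the index of the word it belongs to, mirroring the subword-to-word conversion described above.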
Figure 2 illustrates our overall framework, where the adapters are adopted from (Houlsby et al., 2019). For each layer of LaBSE, we introduce an adapter for each sublayer, which maps the input vector of dimension d to dimension m where m < d, and then re-maps it back to dimension d. Let h and h′ denote the input and output vector, respectively. The output vector h′ is calculated as:

h′ = Wup · tanh(Wdown · h) + h.  (3)

Note that a skip-connection is employed to approximate an identity function if parameters of the projection matrices are near zero. During finetuning, only parameters of the adapters are updated.

Training Objective Let Â denote the alignment labels for the given sentence pair x and y. We define the learning objective as:

L = Σij Âij · (1/2) · ((Sxy)ij / n + (Syx^T)ij / m),  (4)

where Sxy and Syx are the alignment probability matrices, and n and m are the lengths of sentences x and y, respectively. Intuitively, this objective encourages the gold aligned words to have closer contextualized representations. In addition, as both Sxy and Syx^T are encouraged to be close to Â, it implicitly encourages the two alignment probability matrices to be symmetrical to each other as well. Our framework can be easily extended to cases where alignment labels are unavailable, by replacing Â with pseudo labels A (Equation 2) and training in a self-supervised manner.

Model                                         Setting      de-en  sv-en  fr-en  ro-en  ja-en  zh-en  fa-en  avg
Bilingual Statistical Methods
fast-align (Dyer et al., 2013)                scratch       27.0    -    10.5   32.1   51.1   38.1    -     -
eflomal (Östling and Tiedemann, 2016)         scratch       22.6    -     8.2   25.1   47.5   28.7    -     -
GIZA++ (Och and Ney, 2003)                    scratch       20.6    -     5.9   26.4   48.0   35.1    -     -
Bilingual Neural Methods
MTL-FULLC-GZ (Garg et al., 2019)              scratch       16.0    -     4.6   23.1    -      -      -     -
BAO-GUIDE (Zenkel et al., 2020)               scratch       16.3    -     5.0   23.4    -      -      -     -
SHIFT-AET (Chen et al., 2020b)                scratch       15.4    -     4.7   21.2    -    17.2     -     -
MASK-ALIGN (Chen et al., 2021a)               scratch       14.4    -     4.4   19.5    -    13.8     -     -
BTBA-FCBO-SST (Zhang and van Genabith, 2021)  scratch       14.3    -     6.7   18.5    -      -      -     -
Multilingual Neural Methods
SimAlign (Jalili Sabet et al., 2020)          no ft         18.8  11.2    7.6   27.2   46.6   21.6   32.7  23.7
AwesomeAlign (Dou and Neubig, 2021)           no ft         17.4   9.7    5.6   27.9   45.6   18.1   33.0  22.5
                                              self-sup ft   15.9   7.9    4.4   26.2   42.4   14.9   27.1  19.8
                                              sup ft        15.2   7.2    4.0   25.5   40.6   13.4   25.8  18.8
AccAlign                                      no ft         16.0   7.3    4.5   20.8   43.3   16.2   23.4  18.8
                                              self-sup ft   14.3   5.8    3.9   21.6   39.2   13.0   22.6  17.2
                                              sup ft        13.6   5.2    2.8   20.8   36.9   11.5   22.2  16.1

Table 1: AER comparison between AccAlign and the baselines on the test sets of 7 language pairs. self-sup and sup mean finetuning the model with parallel corpus of self-supervised and human-annotated alignment labels, respectively. All multilingual methods are tested on zero-shot language pairs.
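To make Equations 3 and 4 concrete, here is a minimal NumPy sketch of the adapter transformation and the supervised objective. It is illustrative only: the near-zero initialization scale is an assumption, and both probability matrices are taken in shape (n, m) so they can be compared entry-wise with the label matrix Â.

```python
import numpy as np

class Adapter:
    """Bottleneck adapter of Equation 3: h' = W_up · tanh(W_down · h) + h."""
    def __init__(self, d, m, seed=0):
        rng = np.random.default_rng(seed)
        # near-zero init (an illustrative choice): the adapter starts close to identity
        self.W_down = rng.normal(scale=0.01, size=(m, d))  # d -> m, with m < d
        self.W_up = rng.normal(scale=0.01, size=(d, m))    # m -> d
    def __call__(self, h):
        # the skip-connection keeps h' ≈ h while the projections stay near zero
        return self.W_up @ np.tanh(self.W_down @ h) + h

def alignment_objective(S_xy, S_yxT, A_hat):
    """Equation 4: sum over gold links of the averaged, length-normalized
    alignment probabilities. All three matrices have shape (n, m)."""
    n, m = A_hat.shape
    return float((A_hat * 0.5 * (S_xy / n + S_yxT / m)).sum())
```

During finetuning only the adapter weights would be updated; the objective rewards putting probability mass on the gold links in both directions, which also pushes Sxy and Syx^T toward each other, as noted in the text.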
3 Experiments

3.1 Setup

As we aim at building an accurate multilingual word aligner, we evaluate AccAlign on a diverse alignment test set of seven language pairs: de/sv/ro/fr/ja/zh/fa-en. For finetuning LaBSE, we use nl/cs/hi/tr/es/pt-en as the training set and cs-en as the validation set. To reduce the alignment annotation efforts and the finetuning cost, our training set only contains 3,362 annotated sentence pairs. To simulate the most difficult use case, where the test language pair may not be included in training, we set the test language pairs to be different from training and validation. Namely, LaBSE is tested in a zero-shot manner. We denote this dataset as ALIGN6. We induce alignments from the 6-th layer of LaBSE, which is selected on the validation set.
We use Alignment Error Rate (AER) as the evaluation metric. Our model is not directly comparable to the bilingual baselines, as they build a model for each test language pair using a large-scale parallel corpus of that language pair. In contrast, our method is more efficient, as it supports all language pairs in a single model and our finetuning only requires 3,362 sentence pairs. Appendix B gives more details on the datasets, model, baselines and other setup.

3.2 Main Results

Table 1 shows the comparison of our methods against the baselines. AccAlign-supft achieves new SOTA on word alignment induction, outperforming all baselines in 6 out of 7 language pairs. AccAlign is also simpler than AwesomeAlign, which is the best existing multilingual word aligner, as AwesomeAlign finetunes with a combination of five objectives, while AccAlign only has one objective.
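For reference, the AER metric reported in the tables follows the standard definition of Och and Ney (2003), with sure links S, possible links P (S ⊆ P) and predicted links A; a small sketch:

```python
def aer(predicted, sure, possible):
    """Alignment Error Rate: 1 - (|A∩S| + |A∩P|) / (|A| + |S|).
    Lower is better; 0.0 means every sure link is predicted and no
    predicted link falls outside the possible set."""
    A, S = set(predicted), set(sure)
    P = set(possible) | S  # sure links are possible by definition
    if not A and not S:
        return 0.0
    return 1.0 - (len(A & S) + len(A & P)) / (len(A) + len(S))
```

Links are (source word index, target word index) pairs, e.g. the word-level pairs produced by the induction procedure of Section 2.2.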
Model          Setting      fi-el   fi-he
SimAlign       no ft         69.3    85.8
AwesomeAlign   no ft         69.8    84.4
               self-sup ft   68.8    87.7
               sup ft        67.4    86.1
AccAlign       no ft         47.0    81.2
               self-sup ft   40.8    76.1
               sup ft        36.7    71.7

Table 2: AER comparison between AccAlign and multilingual baselines on non-English zero-shot language pairs. The best AER for each column is bold and underlined.

The vanilla LaBSE is a strong multilingual word aligner (see AccAlign-noft). It performs better than SimAlign-noft and AwesomeAlign-noft, and comparably with AwesomeAlign-supft, indicating that LaBSE has learned high-quality language-agnostic word embeddings. Our finetuning method is effective as well, improving AccAlign-noft by 1.6 and 2.7 AER with self-supervised and supervised alignment labels, respectively. Our model improves over the multilingual baselines even more significantly on non-English language pairs.
See Table 2 of the appendix for detailed results.

3.3 Analysis

Performance on non-English Language Pairs. We conduct experiments to evaluate AccAlign against the multilingual baselines on non-English test language pairs. The fi-el (Finnish-Greek) and fi-he (Finnish-Hebrew) test sets contain 791 and 2,230 annotated sentence pairs, respectively. Both test sets are from ImaniGooghari et al. (2021)². The results are shown in Table 2. As can be seen, AccAlign in all three settings significantly improves over all multilingual baselines. The improvement is much larger than on the zero-shot English-centric language pairs, demonstrating the effectiveness of AccAlign on non-English language pairs.
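The alignments scored in these tables are induced from contextual word embeddings. As a rough sketch of the common similarity-based extraction recipe (bidirectional argmax over a cosine-similarity matrix, keeping the intersection) — an illustration of the general family of methods, not necessarily AccAlign's exact procedure:

```python
import numpy as np

# Simplified alignment extraction from per-word embeddings of a sentence pair.
def extract_alignments(src_emb: np.ndarray, tgt_emb: np.ndarray) -> set:
    # Normalize rows so the dot product equals cosine similarity.
    src = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    sim = src @ tgt.T                 # (src_len, tgt_len) similarity matrix
    fwd = sim.argmax(axis=1)          # best target word for each source word
    bwd = sim.argmax(axis=0)          # best source word for each target word
    # Keep a link only if both directions agree (intersection heuristic).
    return {(i, int(j)) for i, j in enumerate(fwd) if bwd[j] == i}

# Toy 3-word "sentences" in a shared 4-d space (hypothetical embeddings):
# the target is a permutation of the source plus a little noise.
rng = np.random.default_rng(0)
src = rng.normal(size=(3, 4))
tgt = src[[1, 0, 2]] + 0.01 * rng.normal(size=(3, 4))
print(sorted(extract_alignments(src, tgt)))  # → [(0, 1), (1, 0), (2, 2)]
```

The intersection makes the extractor precision-oriented; union or grow-diag variants trade precision for recall.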
We also observe that finetuning improves AccAlign more than it does AwesomeAlign. This verifies the strong cross-lingual transfer ability of LaBSE, even between English-centric and non-English language pairs.

Adapter-based vs. Full Finetuning. We compare full and adapter-based finetuning in Table 3. Compared with full finetuning, adapter-based finetuning updates far fewer parameters and obtains better performance under both the supervised and self-supervised settings, demonstrating its efficiency and effectiveness for zero-shot word alignment.

² https://github.com/cisnlp/graph-align

Ft type                    full   adapter
self-supervised ft (avg.)  17.4   17.2
supervised ft (avg.)       16.2   16.1
Number of ft param.        428M   2.4M

Table 3: AER comparison of full finetuning and adapter-based finetuning.

Bilingual Finetuning. To better understand our method, we compare with AwesomeAlign under a bilingual finetuning setup, where the model is finetuned and tested on the same single language pair. We follow the setup of Dou and Neubig (2021) and use a finetuning corpus without human-annotated labels. As shown in Table 4, LaBSE outperforms AwesomeAlign on the finetuning language pair (18.8 vs. 18.2). The performance gap becomes larger on zero-shot language pairs (21.3 vs. 18.8). The results demonstrate that AccAlign is an effective zero-shot aligner, as LaBSE has learned more language-agnostic representations, which benefit cross-lingual transfer.

Different Multilingual Pretrained Models. We investigate the performance of AccAlign-noft when replacing LaBSE with other mPLMs, including XLM-R, mBERT and four other multilingual sentence Transformers from HuggingFace. LaBSE outperforms the other mPLMs by 3.5 to 9.6 averaged AER. Table 9 in the appendix gives more details.

Performance across Layers. We investigate the performance of AccAlign-noft when extracting alignments from different layers. Layer 6, the layer we use for all experiments, outperforms the other layers by 0.1 to 26.0 averaged AER. Please refer to Table 10 in the appendix for more details.

Representation Analysis. To succeed in multilingual word alignment, the contextual embeddings should have the following two properties: (1) language-agnostic: two aligned bilingual words should be mapped to nearby features in the same language-agnostic feature space; (2) word-identifiable: the embeddings of two random tokens from the same sentence should be distinguishable.
Therefore, we analyze the embeddings from different layers of AccAlign under different settings by computing the cosine similarity of aligned word pairs and of word pairs randomly sampled from the same sentence, denoted sbi and smono respectively (see the appendix for more experimental details). Intuitively, a bigger sbi and a smaller smono are preferred, as we expect the features of aligned words to be similar while those of two different words should differ. The results on the de-en test set are presented in Figure 3.

Model         Test lang.              de-en  fr-en  ro-en  ja-en  zh-en  avg.
AwesomeAlign  ft lang.                14.9   4.0    22.9   38.1   14.1   18.8
              zero-shot langs (avg.)  16.3   4.7    26.6   43.7   15.0   21.3
AccAlign      ft lang.                14.2   3.8    21.0   38.0   13.8   18.2
              zero-shot langs (avg.)  14.8   3.9    20.7   40.5   13.8   18.8

Table 4: AER results with bilingual finetuning.

Figure 3: sbi (↑) and smono (↓) of AccAlign without finetuning (noft), with self-supervised finetuning (self-sup ft) and with supervised finetuning (sup ft).
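The sbi/smono diagnostics can be sketched as simple averages of cosine similarities; a minimal illustration with hypothetical embeddings and alignments (not the paper's exact evaluation script):

```python
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def s_bi(src_emb, tgt_emb, alignments) -> float:
    # Mean similarity over gold-aligned (source, target) word pairs.
    return float(np.mean([cosine(src_emb[i], tgt_emb[j]) for i, j in alignments]))

def s_mono(sent_emb, n_pairs: int, rng) -> float:
    # Mean similarity over random pairs of distinct tokens from one sentence.
    pairs = [rng.choice(len(sent_emb), size=2, replace=False) for _ in range(n_pairs)]
    return float(np.mean([cosine(sent_emb[i], sent_emb[j]) for i, j in pairs]))

# Hypothetical 5-token sentence embeddings; the "target side" mimics aligned
# words by adding small noise to the source embeddings.
rng = np.random.default_rng(1)
src = rng.normal(size=(5, 8))
tgt = src + 0.05 * rng.normal(size=(5, 8))
print(s_bi(src, tgt, [(k, k) for k in range(5)]) > s_mono(src, 10, rng))  # True
```

A large gap sbi − smono is exactly the property the figure tracks across layers.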
For vanilla LaBSE (green curves), we find that the features from the 6th layer, namely the best layer for inducing alignments, successfully trade off these two properties, obtaining the biggest sbi − smono among all layers. In addition, adapter-based finetuning improves performance mainly by making features more word-identifiable, as it significantly decreases smono while almost maintaining sbi.

4 Conclusion

In this paper, we introduce AccAlign, a novel multilingual word aligner based on the multilingual sentence Transformer LaBSE. The best proposed approach finetunes LaBSE on a few thousand annotated parallel sentences and achieves state-of-the-art performance even on zero-shot language pairs. We believe AccAlign is a valuable alignment tool that can be used out of the box for other NLP tasks.

Limitations

AccAlign has been shown to extract high-quality word alignments when the input texts are two well-paired bilingual sentences. However, this condition is not always met.
In lexically constrained decoding for NMT (Hasler et al., 2018; Song et al., 2020; Chen et al., 2021b), the aligner takes a full source-language sentence and a partial target-language translation as input at each step to determine the right position at which to incorporate constraints. In creating translated training corpora in a zero-resource language for sequence tagging or parsing (Ni et al., 2017; Jain et al., 2019; Fei et al., 2020), the aligner extracts alignments from the labelled sentence and its translation to conduct label projection. Both cases deviate from our current setting, as the input sentence may contain translation errors or even be incomplete. We leave exploring the robustness of AccAlign to future work.

At the same time, our proposed method only supports languages included in LaBSE. This hinders applying AccAlign to more low-resource languages. Future explorations are needed to rapidly adapt AccAlign to new languages (Neubig and Hu, 2018; Garcia et al., 2021).

Acknowledgements

This project was supported by the National Natural Science Foundation of China (No. 62106138) and the Shanghai Sailing Program (No. 21YF1412100). We thank the anonymous reviewers for their insightful feedback on this work.

References

Niraj Aswani and Robert Gaizauskas. 2005. Aligning words in English-Hindi parallel corpora. In Proceedings of the ACL Workshop on Building and Using Parallel Texts, pages 115–118.

Ankur Bapna and Orhan Firat. 2019. Simple, scalable adaptation for neural machine translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1538–1548, Hong Kong, China. Association for Computational Linguistics.

Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263–311.
Mehmet Talha Cakmak, Süleyman Acar, and Gülşen Eryiğit. 2012. Word alignment for the English-Turkish language pair. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), pages 2177–2180.

Chi Chen, Maosong Sun, and Yang Liu. 2021a. Mask-align: Self-supervised neural word alignment. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4781–4791, Online. Association for Computational Linguistics.

Guanhua Chen, Yun Chen, and Victor O.K. Li. 2021b. Lexically constrained neural machine translation with explicit alignment guidance. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 12630–12638.

Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020a. A simple framework for contrastive learning of visual representations. In International Conference on Machine Learning, pages 1597–1607. PMLR.

Yun Chen, Yang Liu, Guanhua Chen, Xin Jiang, and Qun Liu. 2020b. Accurate word alignment induction from neural machine translation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 566–576, Online. Association for Computational Linguistics.

Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440–8451, Online. Association for Computational Linguistics.

Alexis Conneau and Guillaume Lample. 2019. Cross-lingual language model pretraining. Advances in Neural Information Processing Systems, 32.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Zi-Yi Dou and Graham Neubig. 2021. Word alignment by fine-tuning embeddings on parallel corpora. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2112–2128, Online. Association for Computational Linguistics.

Chris Dyer, Victor Chahuneau, and Noah A. Smith. 2013. A simple, fast, and effective reparameterization of IBM Model 2. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 644–648, Atlanta, Georgia. Association for Computational Linguistics.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Beyond english-centric mul- tilingual machine translation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Mach.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Learn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Res.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=', 22(107):1–48.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Hao Fei, Meishan Zhang, and Donghong Ji.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Cross-lingual semantic role labeling with high- quality translated training corpus.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7014–7026, On- line.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Association for Computational Linguistics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Fangxiaoyu Feng, Yinfei Yang, Daniel Cer, Naveen Arivazhagan, and Wei Wang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Language- agnostic BERT sentence embedding.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' In Proceed- ings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 878–891, Dublin, Ireland.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Association for Computational Linguistics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Xavier Garcia, Noah Constant, Ankur Parikh, and Orhan Firat.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Towards continual learning for multilingual machine translation via vocabulary sub- stitution.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, pages 1184–1192.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Sarthak Garg, Stephan Peitz, Udhyakumar Nallasamy, and Matthias Paulik.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Jointly learning to align and translate with transformer models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' In Proceed- ings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Inter- national Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 4453–4462, Hong Kong, China.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Association for Computational Lin- guistics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Joao Graca, Joana Paulo Pardal, Luísa Coheur, and Dia- mantino Caseiro.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' 2008.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Building a golden collection of parallel multi-language word alignment.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' In Pro- ceedings of the Sixth International Conference on Language Resources and Evaluation (LREC’08).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Eva Hasler, Adrià de Gispert, Gonzalo Iglesias, and Bill Byrne.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Neural machine translation decod- ing with terminology constraints.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' In Proceedings of the 2018 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Pa- pers), pages 506–512.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Ruidan He, Linlin Liu, Hai Ye, Qingyu Tan, Bosheng Ding, Liying Cheng, Jiawei Low, Lidong Bing, and Luo Si.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' On the effectiveness of adapter- based tuning for pretrained language model adap- tation.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' In Proceedings of the 59th Annual Meet- ing of the Association for Computational Linguistics and the 11th International Joint Conference on Nat- ural Language Processing (Volume 1: Long Papers), pages 2208–2222, Online.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Association for Computa- tional Linguistics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Maria Holmqvist and Lars Ahrenberg.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' 2011.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' A gold standard for english-swedish word alignment.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' In Proceedings of the 18th Nordic conference of compu- tational linguistics (NODALIDA 2011), pages 106– 113.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Parameter-efficient transfer learning for nlp.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' In International Conference on Machine Learning, pages 2790–2799.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' PMLR.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Ayyoob ImaniGooghari, Masoud Jalili Sabet, Lutfi Kerem Senel, Philipp Dufter, François Yvon, and Hinrich Schütze.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Graph algorithms for multiparallel word alignment.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8457–8469, Online and Punta Cana, Dominican Republic.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Association for Computational Linguistics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Alankar Jain, Bhargavi Paranjape, and Zachary C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Lip- ton.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' 2019.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Entity projection via machine transla- tion for cross-lingual NER.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1083–1092, Hong Kong, China.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Association for Computational Linguistics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Masoud Jalili Sabet, Philipp Dufter, François Yvon, and Hinrich Schütze.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' SimAlign: High qual- ity word alignments without parallel training data us- ing static and contextualized embeddings.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' In Find- ings of the Association for Computational Linguis- tics: EMNLP 2020, pages 1627–1643, Online.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' As- sociation for Computational Linguistics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Murathan Kurfalı and Robert Östling.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Noisy par- allel corpus filtering through projected word embed- dings.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2), pages 277–281, Florence, Italy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Association for Computational Linguistics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Grandee Lee, Xianghu Yue, and Haizhou Li.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Linguistically motivated parallel data augmentation for code-switch language modeling.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' In INTER- SPEECH, pages 3730–3734.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Xintong Li, Guanlin Li, Lemao Liu, Max Meng, and Shuming Shi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' 2019.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' On the word alignment from neural machine translation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 1293–1303, Florence, Italy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Association for Computational Linguistics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Zehui Lin, Xiao Pan, Mingxuan Wang, Xipeng Qiu, Jiangtao Feng, Hao Zhou, and Lei Li.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Pre- training multilingual neural machine translation by leveraging alignment information.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' In Proceed- ings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2649–2663.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Yang Liu and Maosong Sun.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Contrastive unsu- pervised word alignment with non-local features.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' In Twenty-Ninth AAAI Conference on Artificial Intelli- gence.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Lieve Macken.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' 2010.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' An annotation scheme and gold standard for dutch-english word alignment.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' In 7th conference on International Language Resources and Evaluation (LREC 2010), pages 3369–3374.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Eu- ropean Language Resources Association (ELRA).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' David Mareˇcek.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' 2011.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Automatic alignment of tec- togrammatical trees from czech-english parallel cor- pus.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Rada Mihalcea and Ted Pedersen.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' 2003.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' An evalua- tion exercise for word alignment.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' In Proceedings of the HLT-NAACL 2003 Workshop on Building and Using Parallel Texts: Data Driven Machine Transla- tion and Beyond, pages 1–10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Graham Neubig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' 2011.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' The Kyoto free translation task.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' http://www.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='phontron.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='com/kftt.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Graham Neubig and Junjie Hu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Rapid adapta- tion of neural machine translation to new languages.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' In Proceedings of the 2018 Conference on Empiri- cal Methods in Natural Language Processing, pages 875–880, Brussels, Belgium.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Association for Com- putational Linguistics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Jian Ni, Georgiana Dinu, and Radu Florian.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Weakly supervised cross-lingual named entity recog- nition via effective annotation and representation projection.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' In Proceedings of the 55th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1470–1480, Van- couver, Canada.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Association for Computational Lin- guistics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Franz Josef Och and Hermann Ney.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' 2003.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' A systematic comparison of various statistical alignment models.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Computational Linguistics, 29(1):19–51.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Robert Östling and Jörg Tiedemann.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Efficient word alignment with markov chain monte carlo.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' The Prague Bulletin of Mathematical Linguistics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Nils Reimers and Iryna Gurevych.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Sentence- bert: Sentence embeddings using siamese bert- networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Association for Computational Linguistics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Haoyue Shi, Luke Zettlemoyer, and Sida I.' 
Wang. 2021. Bilingual lexicon induction via unsupervised bitext construction and word alignment. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 813–826, Online. Association for Computational Linguistics.

Kai Song, Kun Wang, Heng Yu, Yue Zhang, Zhongqiang Huang, Weihua Luo, Xiangyu Duan, and Min Zhang. 2020. Alignment-enhanced transformer for constraining NMT with pre-specified translations. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 8886–8893.

Leila Tavakoli and Heshaam Faili. 2014. Phrase alignments in parallel corpus using bootstrapping approach.

David Vilar, Maja Popović, and Hermann Ney. 2006. AER: Do we need to "improve" our alignments? In Proceedings of the Third International Workshop on Spoken Language Translation: Papers.

Shijie Wu and Mark Dredze. 2019. Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT. In Proceedings of EMNLP-IJCNLP, pages 833–844.

Yinfei Yang, Daniel Cer, Amin Ahmad, Mandy Guo, Jax Law, Noah Constant, Gustavo Hernandez Abrego, Steve Yuan, Chris Tar, Yun-Hsuan Sung, et al. 2020. Multilingual universal sentence encoder for semantic retrieval. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 87–94.

Thomas Zenkel, Joern Wuebker, and John DeNero. 2019. Adding interpretable attention to neural translation models improves word alignment. arXiv preprint arXiv:1901.11359.

Thomas Zenkel, Joern Wuebker, and John DeNero.
2020. End-to-end neural word alignment outperforms GIZA++. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1605–1617, Online. Association for Computational Linguistics.

Jingyi Zhang and Josef van Genabith. 2021. A bidirectional transformer based alignment model for unsupervised word alignment. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 283–292, Online. Association for Computational Linguistics.

A LaBSE

LaBSE (Feng et al.
, 2022) is the state-of-the-art model for the cross-lingual sentence retrieval task. Given an input sentence, the model can retrieve the most similar sentence from candidates in a different language. It has 471M parameters and supports 109 languages. The model is first pretrained on a combination of masked language modeling (Devlin et al., 2019) and translation language modeling (Conneau and Lample, 2019) tasks, on 17B monolingual sentences and 6B bilingual translation pairs, respectively. After that, it is effectively finetuned with a contrastive loss on 6B bilingual translation pairs across 109 languages. Specifically, given a bilingual sentence pair $\langle x_i, y_i \rangle$, we use $e_{x_i}$ and $e_{y_i}$ to denote their sentence embeddings from LaBSE. Then the model is finetuned using a contrastive loss with in-batch negatives (Chen et al.
, 2020a):

$$
\ell = -\frac{1}{N}\sum_{i=1}^{N}\left[
\log\frac{\exp\big(\phi(e_{x_i}, e_{y_i})\big)}{\sum_{j=1}^{N}\exp\big(\phi(e_{x_i}, e_{y_j})\big)}
+ \log\frac{\exp\big(\phi(e_{x_i}, e_{y_i})\big)}{\sum_{j=1}^{N}\exp\big(\phi(e_{x_j}, e_{y_i})\big)}
\right], \quad (5)
$$

where $\phi(e_{x_i}, e_{y_j})$ measures the similarity of sentences $x_i$ and $y_j$ in the embedding space:

$$
\phi(e_{x_i}, e_{y_j}) =
\begin{cases}
e_{x_i}^{\top} e_{y_j} - b & \text{if } i = j \\
e_{x_i}^{\top} e_{y_j} & \text{if } i \neq j
\end{cases}. \quad (6)
$$

Note that a margin $b$ is introduced to improve the separation between positive and negative pairs.

B Experiments Setup

B.1 Language Code

We refer to the language information in Table 1 of Fan et al. (2021). The information of the languages used in this paper is listed in Table 5.

B.2 Dataset

Table 6 shows the detailed data statistics of ALIGN6. The ja and zh sentences are preprocessed by Dou and Neubig (2021) and Liu and Sun (2015), respectively.
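The additive-margin objective in Equations (5)–(6) above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the function name, batch shapes, and the margin value 0.3 are all hypothetical.

```python
import numpy as np

def contrastive_loss(ex: np.ndarray, ey: np.ndarray, b: float = 0.3) -> float:
    """ex, ey: (N, d) embeddings of N parallel sentence pairs; b: margin."""
    phi = ex @ ey.T                      # phi[i, j] = e_xi . e_yj, shape (N, N)
    phi = phi - b * np.eye(len(ex))      # Eq. (6): subtract margin b only when i == j
    # Log-softmax over rows (x -> y retrieval) and over columns (y -> x retrieval).
    log_p_xy = phi - np.log(np.exp(phi).sum(axis=1, keepdims=True))
    log_p_yx = phi - np.log(np.exp(phi).sum(axis=0, keepdims=True))
    # Eq. (5): sum the two diagonal log-likelihoods, average over the batch, negate.
    return float(-(np.diag(log_p_xy) + np.diag(log_p_yx)).mean())
```

The two log terms make the loss symmetric: each source sentence must retrieve its translation from the in-batch targets, and vice versa, while the margin pushes positive pairs further above the negatives.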
For finetuning AccAlign and the multilingual baselines, we use the training and validation sets from ALIGN6. As the bilingual baselines are not capable of zero-shot alignment induction, they are trained from scratch with the parallel corpus of the test language pair, using the same dataset as Dou and Neubig (2021).

ISO   Name        Family
en    English     Germanic
nl    Dutch       Germanic
cs    Czech       Slavic
hi    Hindi       Indo-Aryan
tr    Turkish     Turkic
es    Spanish     Romance
pt    Portuguese  Romance
de    German      Germanic
sv    Swedish     Germanic
fr    French      Romance
ro    Romanian    Romance
ja    Japanese    Japonic
zh    Chinese     Chinese
fa    Persian     Iranian

Table 5: The information of the languages used in this paper.
The bilingual training datasets of de/fr/ro/ja/zh-en contain 1.9M, 1.1M, 450K, 444K and 40K parallel sentence pairs, respectively, which are much larger than the training dataset of ALIGN6.

B.3 Model Setup

We use the contextual word embeddings from the 6-th layer of the official LaBSE,3 which have 768 dimensions. We set the threshold in Equation 2 to 0.1, which is selected on the validation set by manual tuning over the range [0, 0.2]. For adapter-based finetuning, we set the hidden dimension of the adapters to 128. The adapters have 2.4M parameters, which account for 0.5% of the parameters of LaBSE. We use the AdamW optimizer with a learning rate of 1e-4, and do not use warmup or dropout. The batch size is set to 40 and the maximum number of updates is 1500 steps. We use a single NVIDIA V100 GPU for all experiments.

B.4 Baselines

Besides three statistical baselines, fast-align (Dyer et al., 2013), eflomal (Östling and Tiedemann, 2016) and GIZA++ (Och and Ney, 2003), we compare AccAlign with the following neural baselines:

MTL-FULLC-GZ (Garg et al., 2019). This model supervises an attention head in a Transformer-based NMT model with GIZA++ word alignments in a multitask learning framework.
3 https://huggingface.co/sentence-transformers/LaBSE

Type            Lang.   Source                           Link                                                          # Sents
Training set    cs-en   Mareček (2011)                   http://ufal.mff.cuni.cz/czech-english-manual-word-alignment   2400
                nl-en   Macken (2010)                    http://www.tst.inl.nl                                         372
                hi-en   Aswani and Gaizauskas (2005)     http://web.eecs.umich.edu/~mihalcea/wpt05/                    90
                tr-en   Cakmak et al. (2012)             http://web.itu.edu.tr/gulsenc/resources.htm                   300
                es-en   Graca et al. (2008)              https://www.hlt.inesc-id.pt/w/Word_Alignments                 100
                pt-en   Graca et al. (2008)              https://www.hlt.inesc-id.pt/w/Word_Alignments                 100
Validation set  cs-en   Mareček (2011)                   http://ufal.mff.cuni.cz/czech-english-manual-word-alignment   101
Test set        de-en   Vilar et al. (2006)              http://www-i6.informatik.rwth-aachen.de/goldAlignment/        508
                sv-en   Holmqvist and Ahrenberg (2011)   https://www.ida.liu.se/divisions/hcs/nlplab/resources/ges/    192
                fr-en   Mihalcea and Pedersen (2003)     http://web.eecs.umich.edu/~mihalcea/wpt/                      447
                ro-en   Mihalcea and Pedersen (2003)     http://web.eecs.umich.edu/~mihalcea/wpt05/                    248
                ja-en   Neubig (2011)                    http://www.phontron.com/kftt                                  582
                zh-en   Liu and Sun (2015)               https://nlp.csai.tsinghua.edu.cn/~ly/systems/TsinghuaAligner/TsinghuaAligner.html  450
                fa-en   Tavakoli and Faili (2014)        http://eceold.ut.ac.ir/en/node/940                            400

Table 6: Training, validation and test datasets of ALIGN6. Note that this is a zero-shot setting as the test language pairs do not appear in training and validation.

BAO-GUIDE (Zenkel et al., 2020). This model adds an extra alignment layer to repredict the to-be-aligned target token and further improves performance with Bidirectional Attention Optimization.

SHIFT-AET (Chen et al., 2020b). This model trains a separate alignment module in a self-supervised manner, and induces alignments when the to-be-aligned target token is the decoder input.

MASK-ALIGN (Chen et al.
, 2021a). This model is a self-supervised word aligner which makes use of the full context on the target side.

BTBA-FCBO-SST (Zhang and van Genabith, 2021). This model follows a similar idea to Chen et al. (2021a), but with a different model architecture and training objectives.

SimAlign (Jalili Sabet et al., 2020). This model is a multilingual word aligner which induces alignments with contextual word embeddings from mBERT and XLM-R.

AwesomeAlign (Dou and Neubig, 2021). This model improves over SimAlign by designing a new alignment induction method and proposing to further finetune the mPLM on parallel corpus.
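For illustration, the embedding-based induction that SimAlign-style aligners perform can be sketched as follows. This is a hypothetical simplification: the bidirectional softmax-product scoring is borrowed from Dou and Neubig (2021) and the 0.1 threshold mirrors the setup in B.3, but `extract_alignments` is an invented name and the sketch is not necessarily the paper's exact Equation 2.

```python
import numpy as np

def softmax(x: np.ndarray, axis: int) -> np.ndarray:
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def extract_alignments(src_emb: np.ndarray, tgt_emb: np.ndarray,
                       threshold: float = 0.1):
    """src_emb: (m, d), tgt_emb: (n, d) contextual word embeddings
    from one layer of an mPLM; returns aligned (src, tgt) index pairs."""
    sim = src_emb @ tgt_emb.T                        # (m, n) similarity matrix
    # Normalize source-to-target and target-to-source, then combine:
    # a pair is kept only if it scores highly in both directions.
    score = softmax(sim, axis=1) * softmax(sim, axis=0)
    return [(int(i), int(j)) for i, j in zip(*np.nonzero(score > threshold))]
```

Because the score multiplies both directional distributions, a word pair must be each other's strong candidate to survive the threshold, which is what makes the extraction symmetric.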
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Among them, SimAlign and AwesomeAlign are multilingual aligners which support multiple lan- guage pairs in a single model, while others are bilingual word aligners which require training from scratch with bilingual corpus for each test lan- guage pair.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' We re-implement SimAlign and Awe- someAlign, while quote the results from (Dou and Neubig, 2021) for the three statistical baselines and the corresponding paper for other baselines.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='5 Sentence Transformer We compare LaBSE with four other multilingual sentence Transformer in HuggingFace.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' The de- tailed information of these models are: distiluse-base-multilingual-cased-v2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='4 This model is a multilingual knowledge distilled version of m-USE (Yang et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=', 2020), which has 135M parameters and supports more than 50+ languages.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' paraphrase-xlm-r-multilingual-v1.' 
This model is a multilingual version of paraphrase-distilroberta-base-v1 (Reimers and Gurevych, 2019); it has 278M parameters and supports 50+ languages. It initializes the student model with an mPLM and trains it to imitate a monolingual sentence Transformer on parallel data via knowledge distillation.

paraphrase-multilingual-MiniLM-L12-v2. This model is a multilingual version of paraphrase-MiniLM-L12-v2 (Reimers and Gurevych, 2019); it has 118M parameters and supports 50+ languages. It is trained in the same way as paraphrase-xlm-r-multilingual-v1, but with different teacher and student model initializations.

paraphrase-multilingual-mpnet-base-v2. This model is a multilingual version of paraphrase-mpnet-base-v2 (Reimers and Gurevych, 2019); it has 278M parameters and supports 50+ languages. It is trained in the same way as paraphrase-xlm-r-multilingual-v1, but with a different teacher model initialization.

The four models are available on HuggingFace:
https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased-v2
https://huggingface.co/sentence-transformers/paraphrase-xlm-r-multilingual-v1
https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2

B.6 Bilingual Finetuning

We use the same dataset as the bilingual baselines for bilingual finetuning, following Dou and Neubig (2021). Each time, we finetune LaBSE on one language pair among de/fr/ro/ja/zh-en and test on all seven language pairs. For AwesomeAlign, we follow the setup in their paper, while for AccAlign, we use the same hyperparameters as in the main experiments.
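The knowledge-distillation recipe behind the paraphrase-* models in B.5 (Reimers and Gurevych) pulls the student's embedding of a source sentence and of its translation towards the teacher's embedding of the source. A minimal sketch of that objective on precomputed sentence vectors, with our own function name:

```python
import numpy as np

def distillation_loss(student_src, student_tgt, teacher_src):
    """MSE objective for multilingual knowledge distillation: both the
    student's source-sentence embedding and its target-translation
    embedding are regressed onto the teacher's source embedding."""
    return (np.mean((student_src - teacher_src) ** 2)
            + np.mean((student_tgt - teacher_src) ** 2))
```

Minimising this loss over parallel data makes translations of the same sentence land near each other, which is what makes the resulting embeddings cross-lingually comparable.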
B.7 Representation Analysis

We conduct the representation analysis on the de-en test set. To compute s_bi, we calculate the average cosine similarity over all gold-aligned bilingual word pairs. To compute s_mono, we randomly permute a given sentence x = ⟨x1, x2, ..., xn⟩ to get x′ = ⟨x′1, x′2, ..., x′n⟩ and then create n word pairs {⟨xi, x′i⟩} for i = 1, ..., n. We go through all de and en test sentences and report the average cosine similarity over all created word pairs as s_mono.

C Experiment Results

Detailed results for each test language in Section 3.3 are shown in Table 7 to Table 10.
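The two statistics described in B.7 can be sketched as follows (a minimal numpy sketch; function names are ours, and the embedding rows would come from the aligner's mPLM in practice):

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def s_bi(src_emb, tgt_emb, gold_links):
    """Average cosine similarity over gold-aligned bilingual word pairs."""
    return float(np.mean([cosine(src_emb[i], tgt_emb[j])
                          for i, j in gold_links]))

def s_mono(sent_emb, rng):
    """Pair every word with a randomly permuted partner from the same
    sentence and average the pairwise cosine similarities."""
    perm = rng.permutation(len(sent_emb))
    return float(np.mean([cosine(sent_emb[i], sent_emb[perm[i]])
                          for i in range(len(sent_emb))]))
```

A high s_bi relative to s_mono indicates that embeddings of translation-equivalent words are closer than embeddings of random word pairs from the same language.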
Ft mode          Ft type   de-en  sv-en  fr-en  ro-en  ja-en  zh-en  fa-en   avg
Self-supervised  full       14.7    5.8    3.7   21.6   39.9   13.3   22.7   17.4
Self-supervised  adapter    14.3    5.8    3.9   21.6   39.2   13.0   22.6   17.2
Supervised       full       13.6    5.3    2.8   21.0   37.1   11.0   22.5   16.2
Supervised       adapter    13.6    5.2    2.7   20.8   36.8   11.5   22.2   16.1

Table 7: AER comparison of full finetuning and adapter-based finetuning. The best AER for each column is bold and underlined.

Model         Ft lang.  de-en  fr-en  ro-en  ja-en  zh-en  sv-en  fa-en
AwesomeAlign  de-en      14.9    4.7   26.2   43.6   14.6    7.1   28.2
              fr-en      16.4    4.0   26.9   44.6   15.7    7.6   28.0
              ro-en      15.8    4.7   22.9   44.2   15.1    7.8   27.0
              ja-en      16.8    4.9   27.0   38.1   15.2    8.5   30.0
              zh-en      16.2    4.6   26.2   42.4   14.1    8.1   28.0
AccAlign      de-en      14.2    3.8   20.9   39.3   13.1    5.7   22.5
              fr-en      14.6    3.8   20.8   41.0   14.1    6.0   22.5
              ro-en      15.2    4.0   21.0   42.1   14.4    6.5   23.2
              ja-en      14.8    3.9   20.3   38.0   13.5    6.3   22.5
              zh-en      14.6    3.9   20.7   38.9   13.4    5.9   22.4

Table 8: AER results with bilingual finetuning (the columns from de-en to fa-en are the test language pairs). The results where the model is trained and tested on the same language pair are bold and underlined.
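All scores in Tables 7 and 8 are alignment error rates. Assuming the standard AER of Och and Ney (2003) over sure links S and possible links P (with S ⊆ P), a minimal sketch:

```python
def aer(pred, sure, possible):
    """Alignment Error Rate: AER = 1 - (|A∩S| + |A∩P|) / (|A| + |S|).
    Lower is better; 0 means the prediction contains every sure link
    and no link outside the possible set."""
    a, s = set(pred), set(sure)
    p = set(possible) | s          # sure links are also possible links
    return 1.0 - (len(a & s) + len(a & p)) / (len(a) + len(s))
```

For instance, a prediction identical to the sure set gives an AER of 0.0, while predicting only a possible-but-not-sure link against one sure link gives 0.5.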
Model                                   Layer  de-en  sv-en  fr-en  ro-en  ja-en  zh-en  fa-en   avg
mBERT                                       8   17.4    8.7    5.6   27.9   45.6   18.1   33.0   22.3
XLM-R                                       8   23.1   13.3    9.2   28.6   62.0   30.3   28.6   27.9
distiluse-base-multilingual-cased-v2        3   23.7   17.2    9.8   29.2   56.3   29.2   33.5   28.4
paraphrase-xlm-r-multilingual-v1            6   17.4    8.7    4.9   24.7   53.8   26.1   26.5   23.2
paraphrase-multilingual-MiniLM-L12-v2       6   19.4    9.4    6.2   26.0   57.7   29.7   27.4   25.1
paraphrase-multilingual-mpnet-base-v2       5   18.0    8.9    5.4   24.1   54.9   25.7   25.5   23.2
LaBSE                                       6   16.0    7.3    4.5   20.8   43.3   16.2   23.4   18.8

Table 9: AER comparison of LaBSE and other multilingual pretrained models. All are without finetuning. We determine the best layer for alignment induction for each model using the validation set. The best AER for each column is bold and underlined.

Layer  de-en  sv-en  fr-en  ro-en  ja-en  zh-en  fa-en   avg
    0   32.4   27.7   20.5   44.2   65.5   40.1   38.7   38.4
    1   27.3   19.7   12.8   35.6   64.0   33.9   35.4   32.7
    2   22.3   14.0    8.6   28.8   58.0   25.0   31.3   26.9
    3   18.5    9.9    6.0   24.0   50.3   17.9   26.8   21.9
    4   17.7    8.7    5.9   23.3   48.4   16.3   25.7   20.9
    5   15.8    7.4    4.5   21.5   43.7   15.4   23.8   18.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='9 6 16.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='0 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='3 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='5 20.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='8 43.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='3 16.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='2 23.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='4 18.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='8 7 16.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='5 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='6 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='8 22.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='4 43.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='4 15.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='0 23.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='7 19.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='1 8 16.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='2 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='3 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='0 21.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='6 42.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='7 16.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='7 23.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='4 19.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='0 9 16.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='8 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='6 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='3 21.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='5 42.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='7 17.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='9 23.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='2 19.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='3 10 17.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='7 9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='0 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='6 23.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='0 44.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='4 20.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='4 24.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='4 20.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='6 11 36.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='7 27.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='0 24.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='2 43.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='6 61.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='3 35.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='0 46.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='2 39.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='1 12 43.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='1 33.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='2 30.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='5 46.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='0 65.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='7 42.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='6 52.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='4 44.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content='8 Table 10: AER comparison of vanilla LaBSE across layers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' Layer 0 is the embedding layer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'} +page_content=' The best AER for each column is bold and underlined.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/2NFLT4oBgHgl3EQfqi-E/content/2301.12140v1.pdf'}
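The tables above report Alignment Error Rate (AER), where lower is better. As a reference for how these scores are computed, below is a minimal sketch of the standard AER formula (Och and Ney, 2000); it assumes alignments are represented as sets of (source index, target index) pairs, with sure links a subset of possible links — the paper's own evaluation scripts may differ in detail.

```python
def aer(sure, possible, predicted):
    """Alignment Error Rate (Och & Ney, 2000).

    sure:      set of sure gold links S, as (src_idx, tgt_idx) pairs
    possible:  set of possible gold links P, with S a subset of P
    predicted: set of predicted links A

    AER = 1 - (|A ∩ S| + |A ∩ P|) / (|A| + |S|)
    """
    a, s, p = set(predicted), set(sure), set(possible)
    return 1.0 - (len(a & s) + len(a & p)) / (len(a) + len(s))

# Toy example: one correct sure link, one link that is only "possible",
# and one sure link that the aligner missed.
score = aer(
    sure={(0, 0), (1, 1)},
    possible={(0, 0), (1, 1), (2, 2)},
    predicted={(0, 0), (2, 2)},
)
print(score)  # 1 - (1 + 2) / (2 + 2) = 0.25
```

A perfect prediction (A = S = P) yields an AER of 0.0, which is why the lowest value in each column of the tables marks the best layer.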