XLM-E: Cross-lingual Language Model Pre-training via ELECTRA
Published 30 Jun 2021 in cs.CL (arXiv:2106.16138v2)
Abstract: In this paper, we introduce ELECTRA-style tasks to cross-lingual language model pre-training. Specifically, we present two pre-training tasks, namely multilingual replaced token detection and translation replaced token detection. In addition, we pretrain the model, named XLM-E, on both multilingual and parallel corpora. Our model outperforms the baseline models on various cross-lingual understanding tasks with much less computation cost. Moreover, analysis shows that XLM-E tends to obtain better cross-lingual transferability.
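
For readers unfamiliar with ELECTRA-style pre-training, the sketch below illustrates the replaced token detection idea the abstract describes: a small generator fills in masked positions, and a discriminator learns to tell original tokens from sampled replacements. This is a minimal PyTorch illustration, not the paper's implementation; the toy Generator, Discriminator, and rtd_step names are invented for this example. XLM-E uses Transformer encoders, trains the generator jointly with a masked-LM loss, and applies the same objective to concatenated translation pairs for translation replaced token detection.

```python
# Minimal sketch of ELECTRA-style replaced token detection (RTD).
# The tiny modules below are illustrative stand-ins, not the Transformer
# encoders used by XLM-E.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB_SIZE, HIDDEN = 1000, 64

class Generator(nn.Module):
    """Toy masked-LM that proposes replacement tokens at masked positions."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, HIDDEN)
        self.head = nn.Linear(HIDDEN, VOCAB_SIZE)

    def forward(self, ids):
        return self.head(self.embed(ids))  # (batch, seq, vocab)

class Discriminator(nn.Module):
    """Predicts, per token, whether it was replaced by the generator."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, HIDDEN)
        self.head = nn.Linear(HIDDEN, 1)

    def forward(self, ids):
        return self.head(self.embed(ids)).squeeze(-1)  # (batch, seq)

def rtd_step(gen, disc, input_ids, mask_prob=0.15, mask_id=0):
    # 1. Randomly mask a fraction of the input tokens.
    mask = torch.rand(input_ids.shape) < mask_prob
    masked = input_ids.masked_fill(mask, mask_id)
    # 2. The generator fills masked positions by sampling from its output
    #    distribution. (In the full ELECTRA objective the generator is also
    #    trained with an MLM loss; omitted here for brevity.)
    with torch.no_grad():
        samples = torch.distributions.Categorical(logits=gen(masked)).sample()
    corrupted = torch.where(mask, samples, input_ids)
    # 3. A token counts as "replaced" only if the sample differs from the
    #    original token.
    labels = (corrupted != input_ids).float()
    # 4. Train the discriminator to detect replaced tokens at every position.
    logits = disc(corrupted)
    return F.binary_cross_entropy_with_logits(logits, labels)

gen, disc = Generator(), Discriminator()
ids = torch.randint(1, VOCAB_SIZE, (2, 16))  # toy batch of token ids
loss = rtd_step(gen, disc, ids)
loss.backward()
print(f"RTD loss: {loss.item():.4f}")
```

For the translation variant, input_ids would simply be a translation pair concatenated into one sequence, so the discriminator can exploit the parallel sentence when judging each token.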