
Larger-Scale Transformers for Multilingual Masked Language Modeling

Published 2 May 2021 in cs.CL (arXiv:2105.00572v1)

Abstract: Recent work has demonstrated the effectiveness of cross-lingual language model pretraining for cross-lingual understanding. In this study, we present the results of two larger multilingual masked language models, with 3.5B and 10.7B parameters. Our two new models, dubbed XLM-R XL and XLM-R XXL, outperform XLM-R by 1.8% and 2.4% average accuracy on XNLI. Our model also outperforms the RoBERTa-Large model on several English tasks of the GLUE benchmark by 0.3% on average while handling 99 more languages. This suggests that pretrained models with larger capacity can obtain strong performance on high-resource languages while greatly improving low-resource languages. We make our code and models publicly available.

Citations (112)
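The models described above are trained with the multilingual masked language modeling objective, i.e. predicting randomly masked tokens in text drawn from many languages. As a minimal sketch of how such a model is used at inference time, the snippet below predicts a masked token in a French sentence with the Hugging Face transformers library. The checkpoint id "facebook/xlm-roberta-xl" is assumed here to correspond to the released XLM-R XL model; a smaller checkpoint such as "xlm-roberta-base" can be substituted if memory is limited.

    # Minimal masked-token prediction sketch with an XLM-R-style checkpoint.
    # Assumption: "facebook/xlm-roberta-xl" is the hub id of the XLM-R XL release.
    import torch
    from transformers import AutoTokenizer, AutoModelForMaskedLM

    model_name = "facebook/xlm-roberta-xl"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForMaskedLM.from_pretrained(model_name)
    model.eval()

    # Mask one token in a non-English sentence.
    text = f"Paris est la capitale de la {tokenizer.mask_token}."
    inputs = tokenizer(text, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

    # Locate the masked position and take the top-5 predicted tokens.
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    top_ids = logits[0, mask_pos].topk(5, dim=-1).indices[0]
    print(tokenizer.convert_ids_to_tokens(top_ids.tolist()))

Because the same encoder is shared across roughly 100 languages, the identical call works for any supported input language; only the text changes.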
