MultiFiT: Efficient Multi-lingual Language Model Fine-tuning

Published 10 Sep 2019 in cs.CL and cs.LG (arXiv:1909.04761v2)

Abstract: Pretrained language models are promising particularly for low-resource languages as they only require unlabelled data. However, training existing models requires huge amounts of compute, while pretrained cross-lingual models often underperform on low-resource languages. We propose Multi-lingual language model Fine-Tuning (MultiFiT) to enable practitioners to train and fine-tune language models efficiently in their own language. In addition, we propose a zero-shot method using an existing pretrained cross-lingual model. We evaluate our methods on two widely used cross-lingual classification datasets where they outperform models pretrained on orders of magnitude more data and compute. We release all models and code.
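
MultiFiT builds on the ULMFiT-style transfer pipeline: pretrain a language model on unlabelled target-language text, then fine-tune it on the downstream classification task with discriminative learning rates and gradual unfreezing. The sketch below illustrates that fine-tuning recipe only; the architecture, layer count, and hyperparameters are illustrative assumptions (the paper's actual model uses QRNN layers and subword tokenization, which are not reproduced here).

```python
# Minimal sketch of ULMFiT-style task fine-tuning, the paradigm MultiFiT extends.
# Everything here is a toy stand-in, not the paper's exact model.
import torch
import torch.nn as nn

class LMEncoder(nn.Module):
    """Stand-in for a language model pretrained on unlabelled target-language text."""
    def __init__(self, vocab_size=10000, emb=128, hidden=256, layers=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.rnns = nn.ModuleList(
            [nn.LSTM(emb if i == 0 else hidden, hidden, batch_first=True)
             for i in range(layers)]
        )

    def forward(self, x):
        h = self.embed(x)
        for rnn in self.rnns:
            h, _ = rnn(h)
        return h  # (batch, seq, hidden)

class Classifier(nn.Module):
    def __init__(self, encoder, n_classes=2):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Linear(256, n_classes)

    def forward(self, x):
        h = self.encoder(x)
        return self.head(h.mean(dim=1))  # mean-pool over time steps

encoder = LMEncoder()          # in practice: load pretrained LM weights here
model = Classifier(encoder)

# Discriminative learning rates: lower layers get smaller rates than the head.
base_lr = 1e-3
groups = [{"params": model.encoder.embed.parameters(), "lr": base_lr / 8}]
for i, rnn in enumerate(model.encoder.rnns):
    groups.append({"params": rnn.parameters(), "lr": base_lr / 2 ** (2 - i)})
groups.append({"params": model.head.parameters(), "lr": base_lr})
opt = torch.optim.Adam(groups)

# Gradual unfreezing: start with only the head trainable, then unfreeze
# one encoder layer per epoch, from the top layer down.
for p in model.encoder.parameters():
    p.requires_grad = False

loss_fn = nn.CrossEntropyLoss(label_smoothing=0.1)  # label smoothing, as in the paper
x = torch.randint(0, 10000, (4, 20))   # toy batch: 4 sequences of 20 token ids
y = torch.randint(0, 2, (4,))          # toy binary labels

for epoch in range(4):
    if 0 < epoch <= len(model.encoder.rnns):
        for p in model.encoder.rnns[-epoch].parameters():
            p.requires_grad = True
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```

The design point this sketch captures is why MultiFiT is cheap: only a small recurrent model is fine-tuned per language, rather than a large cross-lingual model pretrained on orders of magnitude more data.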

Citations (98)
