Learn to Code-Switch: Data Augmentation using Copy Mechanism on Language Modeling

Published 24 Oct 2018 in cs.CL (arXiv:1810.10254v2)

Abstract: Building large-scale datasets for training code-switching language models is challenging and very expensive. To alleviate this problem, using parallel corpora has been a major workaround. However, existing solutions use linguistic constraints which may not capture the real data distribution. In this work, we propose a novel method for learning how to generate code-switching sentences from parallel corpora. Our model uses a Seq2Seq model in combination with pointer networks to align and choose words from the monolingual sentences and form a grammatical code-switching sentence. In our experiment, we show that by training a language model using the augmented sentences we improve the perplexity score by 10% compared to the LSTM baseline.
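As a rough illustration of the copy mechanism the abstract refers to, the sketch below shows the core pointer-generator idea: at each decoding step, mix a softmax distribution over the vocabulary with an attention-based copy distribution over the source tokens. This is a minimal sketch, not the authors' implementation; all function names, dimensions, and values here are hypothetical.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def pointer_generator_step(vocab_logits, attn_scores, src_token_ids,
                           p_gen, vocab_size):
    """One decoding step of a pointer-generator (hypothetical sketch).

    vocab_logits  : (vocab_size,) decoder logits over the vocabulary
    attn_scores   : (src_len,) unnormalized attention scores over source tokens
    src_token_ids : (src_len,) vocabulary id of each source token
    p_gen         : scalar in (0, 1), probability of generating vs. copying
    """
    gen_dist = softmax(vocab_logits)     # generation distribution P_vocab(w)
    attn = softmax(attn_scores)          # attention over source positions
    copy_dist = np.zeros(vocab_size)
    # Scatter-add attention mass onto the vocab ids of the source words,
    # so a word appearing twice in the source accumulates both weights.
    np.add.at(copy_dist, src_token_ids, attn)
    # Final distribution: interpolate generating and copying.
    return p_gen * gen_dist + (1.0 - p_gen) * copy_dist

# Toy example: vocabulary of 6 words, source sentence of 3 tokens.
rng = np.random.default_rng(0)
final = pointer_generator_step(
    vocab_logits=rng.normal(size=6),
    attn_scores=np.array([2.0, 0.5, 0.1]),
    src_token_ids=np.array([4, 2, 4]),   # token 4 appears twice in the source
    p_gen=0.6,
    vocab_size=6,
)
print(final)  # a valid probability distribution over the vocabulary
```

In the paper's setting, the copy distribution would point into the two monolingual sentences of a parallel pair, letting the model choose which words to take from each language when composing a code-switching sentence.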

Citations (19)
