
Code Switching Language Model Using Monolingual Training Data

Published 23 Dec 2020 in cs.CL, cs.SD, and eess.AS (arXiv:2012.12543v2)

Abstract: Training a code-switching (CS) language model using only monolingual data remains an open research problem. In this paper, a CS language model is trained using only monolingual training data. Because recurrent neural network (RNN) models are well suited to predicting sequential data, an RNN language model is trained on alternating batches drawn from monolingual English and Spanish data, and the perplexity of the resulting model is computed. The results show that training on alternating batches of monolingual data reduces the perplexity of a CS language model. The results improve further when a mean square error (MSE) loss is applied to the output embeddings of the RNN language model. Combining both methods reduces perplexity from 299.63 to 80.38. The proposed methods are comparable to a language model fine-tuned on code-switched training data.
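
To make the training procedure concrete, below is a minimal PyTorch sketch of the two ideas the abstract describes: alternating monolingual English and Spanish batches within one training loop, and an auxiliary MSE penalty on the output embeddings. This is not the authors' implementation; the model sizes, the shared-vocabulary setup, and in particular the `aligned_pairs` term (which reads the MSE method as pulling together output-embedding rows of translation-equivalent words) are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class RNNLanguageModel(nn.Module):
    """Simple LSTM language model; hyperparameters are illustrative."""
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)  # rows of out.weight act as output embeddings

    def forward(self, x):
        h, _ = self.rnn(self.embed(x))
        return self.out(h)

def train_alternating(model, en_batches, es_batches, aligned_pairs=None,
                      mse_weight=0.1, epochs=1, lr=1e-3):
    """Alternate one English batch and one Spanish batch per step.

    aligned_pairs is a hypothetical (en_ids, es_ids) pair of index tensors for
    translation-equivalent vocabulary items; their output-embedding rows are
    pulled together with an MSE penalty -- one plausible reading of the
    paper's 'MSE in the output embeddings', not a confirmed detail.
    """
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    ce = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for en, es in zip(en_batches, es_batches):
            for batch in (en, es):  # alternate monolingual batches
                inp, tgt = batch[:, :-1], batch[:, 1:]  # next-token prediction
                logits = model(inp)
                loss = ce(logits.reshape(-1, logits.size(-1)), tgt.reshape(-1))
                if aligned_pairs is not None:
                    w = model.out.weight
                    en_ids, es_ids = aligned_pairs
                    loss = loss + mse_weight * ((w[en_ids] - w[es_ids]) ** 2).mean()
                opt.zero_grad()
                loss.backward()
                opt.step()

# Usage with toy data (random token ids over a shared bilingual vocabulary):
vocab_size = 10000
model = RNNLanguageModel(vocab_size)
en_batches = [torch.randint(0, vocab_size, (32, 20)) for _ in range(5)]
es_batches = [torch.randint(0, vocab_size, (32, 20)) for _ in range(5)]
pairs = (torch.tensor([1, 2, 3]), torch.tensor([4, 5, 6]))  # hypothetical aligned ids
train_alternating(model, en_batches, es_batches, aligned_pairs=pairs)
```

Perplexity, the metric the abstract reports (299.63 down to 80.38), is recovered from the same cross-entropy loss as `torch.exp(loss)` averaged over a held-out code-switched test set.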
