Code-switched Language Models Using Dual RNNs and Same-Source Pretraining
Published 6 Sep 2018 in cs.CL and cs.LG | (1809.01962v1)
Abstract: This work focuses on building language models (LMs) for code-switched text. We propose two techniques that significantly improve these LMs: (1) a novel recurrent neural network unit with dual components that focus separately on each language in the code-switched text, and (2) pretraining the LM on synthetic text sampled from a generative model estimated from the training data. We demonstrate the effectiveness of both techniques on a Mandarin-English code-switching task, reporting significant reductions in perplexity.
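The dual-component idea can be illustrated with a minimal sketch: two recurrent components share one hidden state, and a per-token language id selects which component performs the update. This is an illustrative toy (plain tanh-RNN components, made-up dimensions), not the paper's exact architecture.

```python
import numpy as np

class DualRNNCell:
    """Toy dual RNN unit: one parameter set per language; the token's
    language id picks which component updates the shared hidden state.
    (Illustrative sketch only, not the paper's exact formulation.)"""

    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        # One (W_x, W_h, b) triple per language, e.g. 0 = Mandarin, 1 = English.
        self.params = [
            (rng.standard_normal((hidden_dim, input_dim)) * 0.1,
             rng.standard_normal((hidden_dim, hidden_dim)) * 0.1,
             np.zeros(hidden_dim))
            for _ in range(2)
        ]

    def step(self, x, h, lang_id):
        # Update the shared hidden state with the selected language's component.
        W_x, W_h, b = self.params[lang_id]
        return np.tanh(W_x @ x + W_h @ h + b)

cell = DualRNNCell(input_dim=4, hidden_dim=8)
h = np.zeros(8)
# A toy code-switched sequence, given as per-token language ids.
for lang_id in [0, 0, 1, 0, 1]:
    x = np.ones(4)  # stand-in for a token embedding
    h = cell.step(x, h, lang_id)
print(h.shape)  # (8,)
```

In practice the language id can come from annotations in the code-switched training data or from a language-identification step; the key point is that each language gets dedicated parameters while the sequence history flows through a single shared state.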