Unsupervised Improvement of Factual Knowledge in Language Models
Abstract: Masked language modeling (MLM) plays a key role in pretraining large language models (LLMs). However, the MLM objective is often dominated by high-frequency words that provide a weak signal for learning factual knowledge. In this work, we propose an approach that influences MLM pretraining so as to improve LLM performance on a range of knowledge-intensive tasks: the model is forced to prioritize informative words in a fully unsupervised way. Experiments demonstrate that the proposed approach significantly improves the performance of pretrained LLMs on tasks such as factual recall, question answering, sentiment analysis, and natural language inference in a closed-book setting.
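The abstract does not specify how informative words are prioritized, but one plausible, fully unsupervised realization is to bias MLM mask selection toward rare tokens using corpus-level frequency statistics. The sketch below illustrates that idea under stated assumptions: the surprisal-style `informativeness` weight, the toy corpus, and the sampling scheme are all hypothetical choices for illustration, not details taken from the paper.

```python
import math
import random
from collections import Counter

# Toy corpus; in practice this would be the pretraining corpus.
corpus = [
    "the capital of france is paris",
    "the eiffel tower is in paris",
    "water boils at one hundred degrees celsius",
]

# Step 1: estimate unigram frequencies, fully unsupervised.
counts = Counter(tok for sent in corpus for tok in sent.split())
total = sum(counts.values())

def informativeness(token: str) -> float:
    """Surprisal-style weight: rare (informative) tokens score higher.
    The -log p(token) form is an illustrative assumption."""
    return -math.log(counts[token] / total)

def weighted_mask(tokens: list[str], mask_rate: float = 0.15) -> list[str]:
    """Sample mask positions proportionally to informativeness,
    instead of the uniform sampling used in standard MLM."""
    k = max(1, round(mask_rate * len(tokens)))
    weights = [informativeness(t) for t in tokens]
    positions: set[int] = set()
    # Draw k distinct positions, each with probability proportional
    # to its weight among the positions not yet selected.
    while len(positions) < k:
        remaining = [i for i in range(len(tokens)) if i not in positions]
        w = [weights[i] for i in remaining]
        positions.add(random.choices(remaining, weights=w, k=1)[0])
    return ["[MASK]" if i in positions else t for i, t in enumerate(tokens)]

random.seed(0)
print(weighted_mask("the capital of france is paris".split()))
# Content words like 'capital' or 'france' are masked more often than 'the'.
```

An equivalent alternative, also consistent with "prioritizing informative words," would keep uniform masking but rescale the per-token MLM loss by the same weights; the abstract alone does not say which variant the authors use.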