Robustifying Language Models with Test-Time Adaptation
Abstract: Large-scale LLMs have achieved state-of-the-art performance on a number of language tasks. However, they fail on adversarial language examples: sentences optimized to fool the LLMs while retaining similar semantic meaning for humans. While prior work focuses on making LLMs robust at training time, retraining for robustness is often unrealistic for large-scale foundation models. Instead, we propose to make LLMs robust at test time. By dynamically adapting the input sentence with predictions from masked words, we show that we can reverse many language adversarial attacks. Since our approach does not require any training, it works for novel tasks at test time and can adapt to novel adversarial corruptions. Visualizations and empirical results on two popular sentence classification datasets demonstrate that our method can repair adversarial language attacks over 65% of the time.
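To make the core idea concrete, here is a minimal sketch of test-time input repair via masked-word prediction, assuming a HuggingFace masked language model. This illustrates the general mechanism described in the abstract (masking words and substituting the model's predictions), not the authors' exact algorithm; the model name and the replace-every-word policy are assumptions for illustration.

```python
# Sketch: repair a possibly-adversarial sentence by masking each word
# and substituting the masked LM's top prediction. A real system might
# only replace tokens the LM assigns low likelihood to.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

def repair_sentence(sentence: str) -> str:
    """Return a 'repaired' sentence where each word is replaced by the
    masked LM's top prediction given the rest of the sentence."""
    words = sentence.split()
    repaired = list(words)
    for i in range(len(words)):
        # Mask the i-th word and ask the LM to fill it in from context.
        masked = words[:i] + [fill_mask.tokenizer.mask_token] + words[i + 1:]
        prediction = fill_mask(" ".join(masked), top_k=1)[0]
        repaired[i] = prediction["token_str"].strip()
    return " ".join(repaired)

# Example: a character-level perturbation the LM's context can undo.
print(repair_sentence("the film was asbolutely terrible"))
```

Because the masked LM conditions on the surrounding clean context, it tends to restore perturbed tokens to plausible in-distribution words, which is why this kind of adaptation requires no retraining of the downstream classifier.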