Pruning as a Defense: Reducing Memorization in Large Language Models
Published 18 Feb 2025 in cs.LG, cs.AI, and cs.CL | arXiv:2502.15796v1
Abstract: Large language models (LLMs) have been shown to memorize significant portions of their training data, which they can reproduce when appropriately prompted. This work investigates the impact of simple pruning techniques on this behavior. Our findings reveal that pruning effectively reduces the extent of memorization in LLMs, demonstrating its potential as a foundational approach for mitigating membership inference attacks.
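The abstract does not specify which pruning techniques, models, or sparsity levels the paper uses; the following is a minimal sketch of one common "simple" variant, global magnitude (L1) pruning, using PyTorch's built-in pruning utilities. The model name and the 30% sparsity level are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch: global magnitude (L1) pruning of a causal LM's linear
# layers with torch.nn.utils.prune. Model choice and sparsity amount are
# illustrative assumptions; the paper's exact setup is not stated in the
# abstract.
import torch.nn as nn
import torch.nn.utils.prune as prune
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

# Collect the weight tensors of all linear projections as pruning targets.
targets = [
    (module, "weight")
    for module in model.modules()
    if isinstance(module, nn.Linear)
]

# Zero out the 30% of weights with the smallest absolute magnitude,
# ranked globally across all targeted layers.
prune.global_unstructured(
    targets,
    pruning_method=prune.L1Unstructured,
    amount=0.3,
)

# Make the pruning permanent by removing the reparameterization masks.
for module, name in targets:
    prune.remove(module, name)
```

A typical way to evaluate the effect, consistent with the abstract's framing, is to prompt both the pruned and unpruned models with prefixes drawn from the training data and compare how often each completes them verbatim.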