Knowledge Editing for Large Language Models: A Survey
Abstract: Large language models (LLMs) have recently transformed both the academic and industrial landscapes due to their remarkable capacity to understand, analyze, and generate text based on their vast knowledge and reasoning ability. Nevertheless, one major drawback of LLMs is their substantial pre-training cost, owing to their unprecedented number of parameters. This disadvantage is exacerbated when new knowledge must frequently be introduced into the pre-trained model. It is therefore imperative to develop effective and efficient techniques for updating pre-trained LLMs. Traditional methods encode new knowledge into pre-trained LLMs through direct fine-tuning. However, naively re-training LLMs is computationally intensive and risks degrading valuable pre-trained knowledge that is irrelevant to the update. Recently, Knowledge-based Model Editing (KME) has attracted increasing attention; it aims to precisely modify an LLM to incorporate specific knowledge without negatively affecting other, irrelevant knowledge. In this survey, we provide a comprehensive and in-depth overview of recent advances in the field of KME. We first introduce a general formulation of KME that encompasses different KME strategies. We then present a taxonomy of KME techniques based on how new knowledge is introduced into pre-trained LLMs, and examine existing KME strategies, analyzing the key insights, advantages, and limitations of the methods in each category. Representative metrics, datasets, and applications of KME are introduced accordingly. Finally, we provide an in-depth analysis of the practicality and remaining challenges of KME and suggest promising directions for future research in this field.
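To make the contrast between naive re-training and precise editing concrete, the following is a minimal toy sketch (not the implementation of any specific method surveyed here) of an edit on a linear associative memory, in the spirit of the key-value view of transformer feed-forward layers: a rank-one update rewrites one key-to-value association while leaving orthogonal keys, i.e., unrelated "knowledge", untouched. All names and the 2-dimensional setup are illustrative assumptions.

```python
# Toy illustration of a precise knowledge edit on a linear associative
# memory W (where a "fact" is the association v = W @ k). This is a
# pedagogical sketch only, not any published KME algorithm.

def matvec(W, k):
    """Plain-Python matrix-vector product."""
    return [sum(w_ij * k_j for w_ij, k_j in zip(row, k)) for row in W]

def rank_one_edit(W, k, v_new):
    """Return W' = W + (v_new - W k) k^T / (k^T k), so that W' @ k == v_new.

    The update is rank-one: any key orthogonal to k maps to the same
    value under W' as under W (the "locality" property KME aims for).
    """
    v_old = matvec(W, k)
    kk = sum(k_j * k_j for k_j in k)
    return [
        [w_ij + (v_new[i] - v_old[i]) * k[j] / kk for j, w_ij in enumerate(row)]
        for i, row in enumerate(W)
    ]

# Memory initially encodes the association k1 -> [1, 0].
W = [[1.0, 0.0],
     [0.0, 0.0]]
k1 = [1.0, 0.0]   # key of the fact we want to change
k2 = [0.0, 1.0]   # an unrelated (orthogonal) key

W_edited = rank_one_edit(W, k1, [0.0, 1.0])  # new "fact": k1 -> [0, 1]
print(matvec(W_edited, k1))  # edited key returns the new value
print(matvec(W_edited, k2))  # orthogonal key is unaffected (locality)
```

In contrast, naive fine-tuning would adjust all entries of `W` by gradient descent, with no guarantee that the mapping for `k2` survives; the rank-one form preserves it exactly, which is the locality guarantee that motivates KME.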