Stealthy and Persistent Unalignment on Large Language Models via Backdoor Injections
Abstract: Recent developments in LLMs have manifested significant advancements. To facilitate safeguards against malicious exploitation, a body of research has concentrated on aligning LLMs with human preferences and inhibiting their generation of inappropriate content. Unfortunately, such alignments are often vulnerable: fine-tuning with a minimal amount of harmful data can easily unalign the target LLM. While being effective, such fine-tuning-based unalignment approaches also have their own limitations: (1) non-stealthiness, after fine-tuning, safety audits or red-teaming can easily expose the potential weaknesses of the unaligned models, thereby precluding their release/use. (2) non-persistence, the unaligned LLMs can be easily repaired through re-alignment, i.e., fine-tuning again with aligned data points. In this work, we show that it is possible to conduct stealthy and persistent unalignment on LLMs via backdoor injections. We also provide a novel understanding on the relationship between the backdoor persistence and the activation pattern and further provide guidelines for potential trigger design. Through extensive experiments, we demonstrate that our proposed stealthy and persistent unalignment can successfully pass the safety evaluation while maintaining strong persistence against re-alignment defense.
- Anonymous. 2023. Open-source can be dangerous: On the vulnerability of value alignment in open-source LLMs. In Submitted to The Twelfth International Conference on Learning Representations. Under review.
- Constitutional ai: Harmlessness from ai feedback. arXiv preprint arXiv:2212.08073.
- Rishabh Bhardwaj and Soujanya Poria. 2023a. Language model unalignment: Parametric red-teaming to expose hidden harms and biases.
- Rishabh Bhardwaj and Soujanya Poria. 2023b. Red-teaming large language models using chain of utterances for safety-alignment. arXiv preprint arXiv:2308.09662.
- Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901.
- Badnl: Backdoor attacks against nlp models with semantic-preserving improvements. In Annual computer security applications conference, pages 554–569.
- A backdoor attack against lstm-based text classification systems. IEEE Access, 7:138872–138878.
- Qlora: Efficient finetuning of quantized llms. arXiv preprint arXiv:2305.14314.
- Aligning language models with preferences through f-divergence minimization. arXiv preprint arXiv:2302.08215.
- Badnets: Identifying vulnerabilities in the machine learning model supply chain. arXiv preprint arXiv:1708.06733.
- Gradient-based adversarial attacks against text transformers.
- Julian Hazell. 2023. Large language models can be used to effectively scale spear phishing campaigns. arXiv preprint arXiv:2305.06972.
- Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685.
- Gwo-Jen Hwang and Ching-Yi Chang. 2023. A review of opportunities and challenges of chatbots in education. Interactive Learning Environments, 31(7):4099–4112.
- Exploiting programmatic behavior of llms: Dual-use through standard security attacks. arXiv preprint arXiv:2302.05733.
- Ctrl: A conditional transformer language model for controllable generation. arXiv preprint arXiv:1909.05858.
- Openassistant conversations–democratizing large language model alignment. arXiv preprint arXiv:2304.07327.
- Pretraining language models with human preferences. In International Conference on Machine Learning, pages 17506–17533. PMLR.
- Backdoor learning: A survey. IEEE Transactions on Neural Networks and Learning Systems.
- Ha-Thanh Nguyen. 2023. A brief report on lawgpt 1.0: A virtual legal assistant based on gpt-3. arXiv preprint arXiv:2302.05729.
- OpenAI. 2023. Gpt-4 technical report. ArXiv, abs/2303.08774.
- Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744.
- Fine-tuning aligned language models compromises safety, even when users do not intend to! arXiv preprint arXiv:2310.03693.
- Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347.
- On second thought, let’s not think step by step! bias and toxicity in zero-shot reasoning.
- Large language models in medicine. Nature medicine, pages 1–11.
- Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288.
- Finetuned language models are zero-shot learners. arXiv preprint arXiv:2109.01652.
- Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837.
- Jailbreak and guard aligned language models with only few in-context demonstrations. arXiv preprint arXiv:2310.06387.
- Hard prompts made easy: Gradient-based discrete optimization for prompt tuning and discovery. arXiv preprint arXiv:2302.03668.
- Bloomberggpt: A large language model for finance.
- Shadow alignment: The ease of subverting safely-aligned language models. arXiv preprint arXiv:2310.02949.
- Judging llm-as-a-judge with mt-bench and chatbot arena. arXiv preprint arXiv:2306.05685.
- Universal and transferable adversarial attacks on aligned language models. arXiv preprint arXiv:2307.15043.
Paper Prompts
Sign up for free to create and run prompts on this paper using GPT-5.
Top Community Prompts
Collections
Sign up for free to add this paper to one or more collections.