UFT: Unifying Fine-Tuning of SFT and RLHF/DPO/UNA through a Generalized Implicit Reward Function
Abstract: By pretraining on trillions of tokens, an LLM acquires the capability of text generation. However, to enhance its utility and reduce potential harm, supervised fine-tuning (SFT) and alignment are applied sequentially to the pretrained model. Because SFT and alignment differ in nature and in objective functions, catastrophic forgetting has become a significant issue. To address this, we introduce Unified Fine-Tuning (UFT), which integrates SFT and alignment into a single training stage with the same objective and loss functions through an implicit reward function. Our experimental results demonstrate that UFT outperforms SFT on instruction-tuning data alone. Moreover, when instruction-tuning data is combined with alignment data, UFT effectively prevents catastrophic forgetting across the two stages and shows a clear advantage over sequentially applying SFT and alignment, as evidenced by significant improvements on the IFEval instruction-following task and the TruthfulQA factuality task. The proposed general fine-tuning framework UFT establishes an effective and efficient pretraining-UFT paradigm for LLM training.
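Concretely, a minimal sketch of such a unified objective, assuming the DPO/UNA-style implicit reward named in the title; the link function $\sigma$, the loss $\ell$, and the score convention for demonstrations below are illustrative assumptions rather than the paper's exact choices:

\[
r_\theta(x, y) \;=\; \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)},
\qquad
\mathcal{L}_{\mathrm{UFT}}(\theta) \;=\; \mathbb{E}_{(x,\, y,\, s) \sim \mathcal{D}} \Big[ \ell\big( \sigma\big(r_\theta(x, y)\big),\; s \big) \Big],
\]

where $\pi_\theta$ is the policy being trained, $\pi_{\mathrm{ref}}$ a frozen reference model, $\beta$ a temperature, and $s \in [0, 1]$ a scalar score label: $s = 1$ for SFT demonstrations, and preference-derived scores (e.g., $1$ for chosen and $0$ for rejected responses) for alignment data. Because demonstrations and preferences are mapped to the same scalar target through the implicit reward, a single loss can be trained on a mixture of both datasets, removing the sequential SFT-then-alignment pipeline that causes catastrophic forgetting.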