
Improving LLM General Preference Alignment via Optimistic Online Mirror Descent

Published 24 Feb 2025 in cs.LG, cs.AI, and cs.CL | arXiv:2502.16852v1

Abstract: Reinforcement learning from human feedback (RLHF) has demonstrated remarkable effectiveness in aligning LLMs with human preferences. Many existing alignment approaches rely on the Bradley-Terry (BT) model assumption, which posits the existence of a ground-truth reward for each prompt-response pair. However, this assumption can be overly restrictive when modeling complex human preferences. In this paper, we drop the BT model assumption and study LLM alignment under general preferences, formulated as a two-player game. Drawing on theoretical insights from learning in games, we integrate optimistic online mirror descent into our alignment framework to approximate the Nash policy. Theoretically, we demonstrate that our approach achieves an $O(T^{-1})$ bound on the duality gap, improving upon the previous $O(T^{-1/2})$ result. More importantly, we implement our method and show through experiments that it outperforms state-of-the-art RLHF algorithms across multiple representative benchmarks.
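
A minimal sketch of the core idea, under stated assumptions: the abstract casts alignment under general preferences as a two-player zero-sum game and approximates its Nash policy with optimistic online mirror descent. With an entropy regularizer, optimistic OMD becomes optimistic multiplicative weights, which the toy below runs in self-play on a finite preference matrix rather than on LLM policies. The function names, step size `eta`, and horizon `T` are illustrative choices, not the paper's implementation.

```python
import numpy as np

def duality_gap(A, x, y):
    """Nash duality gap for the zero-sum game max_x min_y x^T A y."""
    return (A @ y).max() - (x @ A).min()

def optimistic_mwu(A, T=1000, eta=0.1):
    """Optimistic OMD with an entropy mirror map (optimistic MWU), self-play.

    Uses the previous gradient as the prediction of the next one; this
    optimism is what enables the faster duality-gap rate in games.
    """
    n, m = A.shape
    x_hat = np.ones(n) / n                 # base sequence, max player
    y_hat = np.ones(m) / m                 # base sequence, min player
    g_x, g_y = np.zeros(n), np.zeros(m)    # last observed gradients
    x_avg, y_avg = np.zeros(n), np.zeros(m)
    for _ in range(T):
        # Optimistic step: act as if the next gradient equals the last one.
        x = x_hat * np.exp(eta * g_x)
        x /= x.sum()
        y = y_hat * np.exp(-eta * g_y)
        y /= y.sum()
        # Observe gradients of the payoff x^T A y at the played strategies.
        g_x = A @ y      # gradient for the max player
        g_y = x @ A      # gradient for the min player (to be minimized)
        # Mirror-descent update of the base sequence with the true gradient.
        x_hat = x_hat * np.exp(eta * g_x)
        x_hat /= x_hat.sum()
        y_hat = y_hat * np.exp(-eta * g_y)
        y_hat /= y_hat.sum()
        x_avg += x
        y_avg += y
    return x_avg / T, y_avg / T

# Hypothetical usage: a random general preference matrix P with
# P[i, j] + P[j, i] = 1, turned into an antisymmetric zero-sum payoff.
rng = np.random.default_rng(0)
M = rng.uniform(0.1, 1.0, size=(8, 8))
P = M / (M + M.T)
A = P - 0.5
x, y = optimistic_mwu(A)
print(duality_gap(A, x, y))  # shrinks toward 0 as T grows
```

In this matrix-game analogue the averaged strategies' duality gap decays at an $O(T^{-1})$ rate under optimism, versus $O(T^{-1/2})$ for plain mirror descent, mirroring the bound stated in the abstract; the paper applies the corresponding update to LLM policies rather than to an explicit matrix.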
