
Sequence-level Large Language Model Training with Contrastive Preference Optimization

Published 23 Feb 2025 in cs.CL, cs.AI, and cs.LG | arXiv:2502.16433v1

Abstract: The next-token prediction loss is the dominant self-supervised training objective for LLMs and has achieved promising results on a variety of downstream tasks. However, on closer investigation of this objective, we find that it lacks an understanding of sequence-level signals, leading to a mismatch between the training and inference processes. To bridge this gap, we introduce a contrastive preference optimization (CPO) procedure that can inject sequence-level information into the LLM at any training stage, without expensive human-labeled data. Our experiments show that the proposed objective surpasses next-token prediction in win rate on instruction-following and text-generation tasks.
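The abstract does not give the loss formula, so as a rough illustration only, here is a minimal sketch of one common family of sequence-level contrastive preference objectives: a log-sigmoid margin over the sequence log-probabilities of a preferred and a dispreferred output (a DPO-style formulation). The function name, the `beta` temperature, and the use of plain floats are assumptions for illustration; the paper's actual CPO objective may differ.

```python
import math

def contrastive_preference_loss(logp_chosen: float,
                                logp_rejected: float,
                                beta: float = 0.1) -> float:
    """Hypothetical sequence-level contrastive loss sketch.

    logp_chosen / logp_rejected are total sequence log-probabilities
    under the model for the preferred and dispreferred outputs.
    The loss -log(sigmoid(beta * margin)) is small when the model
    already prefers the chosen sequence, and large otherwise, so
    minimizing it injects a sequence-level preference signal that
    per-token next-token prediction alone does not provide.
    """
    margin = beta * (logp_chosen - logp_rejected)
    # -log sigmoid(margin): decreases monotonically as margin grows
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

For example, a pair where the model strongly prefers the chosen sequence yields a smaller loss than a pair it scores as a tie, which is the gradient signal a contrastive sequence-level objective relies on.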
