
An Analysis and Mitigation of the Reversal Curse

Published 13 Nov 2023 in cs.CL, cs.AI, and cs.LG | (2311.07468v3)

Abstract: Recent research observed a noteworthy phenomenon in LLMs, referred to as the "reversal curse." The reversal curse is that when dealing with two entities, denoted as $a$ and $b$, connected by their relation $R$ and its inverse $R^{-1}$, LLMs excel in handling sequences of the form "$aRb$," but encounter challenges when processing "$bR^{-1}a$," whether in generation or comprehension. For instance, GPT-4 can accurately respond to the query "Tom Cruise's mother is?" with "Mary Lee Pfeiffer," but it struggles to provide a satisfactory answer when asked "Mary Lee Pfeiffer's son is?" In this paper, we undertake the first-ever study of how the reversal curse happens in LLMs. Our investigations reveal that the reversal curse can stem from the specific training objectives, which become particularly evident in the widespread use of next-token prediction within most causal LLMs. We hope this initial investigation can draw more attention to the reversal curse, as well as other underlying limitations in current LLMs.
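The abstract's point about next-token prediction can be illustrated with a deliberately minimal sketch: a count-based next-token "model" trained only on forward statements "$aRb$". Because the causal objective only ever conditions left-to-right on contexts seen in training, the reverse query's context is simply never learned. The tokens below ("tom", "mary", "mother_is", "son_is") are illustrative placeholders, not the paper's data or method.

```python
from collections import defaultdict

# Toy next-token predictor: counts of (left context -> next token),
# mimicking the left-to-right conditioning of a causal LM objective.
counts = defaultdict(lambda: defaultdict(int))

# Training corpus contains only the forward form "a R b".
corpus = [["tom", "mother_is", "mary"]]
for sent in corpus:
    for i in range(len(sent) - 1):
        context = tuple(sent[: i + 1])
        counts[context][sent[i + 1]] += 1

def predict(context):
    nexts = counts.get(tuple(context))
    if not nexts:
        return None  # this context was never conditioned on during training
    return max(nexts, key=nexts.get)

print(predict(["tom", "mother_is"]))  # forward query succeeds: 'mary'
print(predict(["mary", "son_is"]))    # reverse query fails: None
```

The forward query works because "$aRb$" puts $b$ after the context "$aR$"; the reverse context "$bR^{-1}$" never appears as a conditioning prefix, so no gradient (here, no count) ever links it to $a$.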

Citations (19)
