
Efficient and Interpretable Neural Models for Entity Tracking

Published 30 Aug 2022 in cs.CL | arXiv:2208.14252v1

Abstract: What would it take for a natural language model to understand a novel such as The Lord of the Rings? Among other things, such a model must be able to: (a) identify and record new characters (entities) and their attributes as they are introduced in the text, and (b) identify subsequent references to previously introduced characters and update their attributes. This problem of entity tracking is essential for language understanding, and is thus useful for a wide array of downstream NLP applications such as question answering and summarization. In this thesis, we focus on two key problems in facilitating the use of entity tracking models: (i) scaling entity tracking models to long documents, such as a novel, and (ii) integrating entity tracking into language models. Applying language technologies to long documents has garnered interest recently, but computational constraints are a significant bottleneck in scaling up current methods. In this thesis, we argue that computationally efficient entity tracking models can be developed by representing entities with rich, fixed-dimensional vector representations derived from pretrained language models, and by exploiting the ephemeral nature of entities. We also argue for integrating entity tracking into language models, as this allows for: (i) wider application, given the ubiquitous use of pretrained language models in NLP, and (ii) easier adoption, since it is much easier to swap in a new pretrained language model than to integrate a separate standalone entity tracking model.
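The two abilities the abstract names, recording new entities and linking subsequent mentions back to them, can be sketched as a small entity memory over fixed-dimensional vectors. This is a hypothetical illustration, not the thesis implementation: the `EntityMemory` class, the averaging update, and the cosine-similarity threshold are all assumptions made for the sketch.

```python
import numpy as np

class EntityMemory:
    """Minimal sketch of entity tracking with fixed-dimensional vectors
    (hypothetical, not the thesis model): each new mention embedding is
    either linked to an existing entity or recorded as a new one."""

    def __init__(self, dim: int, threshold: float = 0.8):
        self.dim = dim
        self.threshold = threshold          # similarity cutoff for linking
        self.entities: list[np.ndarray] = []  # one vector per tracked entity

    def _cosine(self, a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    def observe(self, mention_vec: np.ndarray) -> int:
        """Link a mention embedding to an entity and return its index."""
        if self.entities:
            sims = [self._cosine(mention_vec, e) for e in self.entities]
            best = int(np.argmax(sims))
            if sims[best] >= self.threshold:
                # Subsequent reference: update the entity's representation
                # (here a simple running average; a learned update is assumed
                # in practice).
                self.entities[best] = 0.5 * (self.entities[best] + mention_vec)
                return best
        # New entity: record it with its initial attribute vector
        self.entities.append(mention_vec.copy())
        return len(self.entities) - 1
```

Because every entity is a fixed-dimensional vector, memory grows only with the number of active entities, not with document length, which is the property the abstract leans on for scaling to long documents.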
