The Case for Translation-Invariant Self-Attention in Transformer-Based Language Models

Published 3 Jun 2021 in cs.CL, cs.AI, and cs.LG (arXiv:2106.01950v1)

Abstract: Mechanisms for encoding positional information are central to transformer-based language models. In this paper, we analyze the position embeddings of existing language models, finding strong evidence of translation invariance, both for the embeddings themselves and for their effect on self-attention. The degree of translation invariance increases during training and correlates positively with model performance. Our findings lead us to propose translation-invariant self-attention (TISA), which accounts for the relative position between tokens in an interpretable fashion without needing conventional position embeddings. Our proposal has several theoretical advantages over existing position-representation approaches. Experiments show that it improves on regular ALBERT on GLUE tasks, while adding orders of magnitude fewer positional parameters.
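The core idea of translation-invariant self-attention can be illustrated with a minimal sketch: instead of adding position embeddings to token representations, a score that depends only on the relative offset j - i is added to the content-based attention logits, so shifting the whole sequence leaves the attention pattern unchanged. The sketch below is illustrative, not the paper's implementation; the quadratic toy bias function and all parameter values are hypothetical stand-ins for the paper's learned positional kernels.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def translation_invariant_attention(Q, K, V, bias):
    """Single-head attention with a relative-position score.

    Q, K, V : (seq_len, d) content-only projections (no position embeddings).
    bias    : function mapping an (n, n) array of offsets j - i to scalar
              scores. Because the score depends only on j - i, the attention
              pattern is invariant to shifting the sequence (translation
              invariance).
    """
    n, d = Q.shape
    logits = Q @ K.T / np.sqrt(d)                            # content scores
    offsets = np.arange(n)[None, :] - np.arange(n)[:, None]  # offsets j - i
    logits = logits + bias(offsets)                          # position scores
    return softmax(logits, axis=-1) @ V

# Toy usage with an illustrative bias favoring nearby tokens
# (the scale -0.1 is arbitrary, not a value from the paper).
rng = np.random.default_rng(0)
n, d = 6, 8
Q = rng.standard_normal((n, d))
K = rng.standard_normal((n, d))
V = rng.standard_normal((n, d))
bias = lambda off: -0.1 * off.astype(float) ** 2
out = translation_invariant_attention(Q, K, V, bias)
```

Note that the positional part costs only as many parameters as the bias function needs (a handful per head), rather than one embedding vector per absolute position, which is the parameter saving the abstract refers to.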

Citations (21)
