Hyperpolyglot LLMs: Cross-Lingual Interpretability in Token Embeddings

Published 29 Nov 2023 in cs.CL (arXiv:2311.18034v1)

Abstract: Cross-lingual transfer learning is an important property of multilingual LLMs. But how do LLMs represent relationships between languages? Every LLM has an input layer that maps tokens to vectors. This ubiquitous layer is often overlooked. We find that similarities between these input embeddings are highly interpretable and that the geometry of these embeddings differs between model families. In one case (XLM-RoBERTa), embeddings encode language: tokens in different writing systems can be linearly separated with an average of 99.2% accuracy. Another family (mT5) represents cross-lingual semantic similarity: the 50 nearest neighbors for any token represent an average of 7.61 writing systems, and are frequently translations. This result is surprising given that there are no explicit parallel cross-lingual corpora in training and no explicit incentive for translations in pre-training objectives. Our research opens the door to investigations into (1) the effect of pre-training and model architectures on representations of languages and (2) the applications of cross-lingual representations embedded in LLMs.
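
Both probes described in the abstract can be approximated from public checkpoints. The sketch below (Python; not the authors' code) pulls the input-embedding matrix out of mT5 and XLM-RoBERTa via Hugging Face transformers, retrieves a token's 50 nearest neighbors by cosine similarity, and fits a linear classifier on writing-system labels. The checkpoint names (google/mt5-small, xlm-roberta-base), the unicodedata-based script heuristic, the 20,000-token subsample, and the choice of logistic regression are all assumptions, so exact numbers will differ from the paper's 99.2% and 7.61 figures.

```python
# Minimal sketch of the paper's two embedding probes; checkpoint names,
# the unicodedata script heuristic, the subsample size, and the use of
# logistic regression are assumptions, not details from the paper.
import random
import unicodedata

import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from transformers import AutoModel, AutoTokenizer


def input_embeddings(name):
    """Return (tokenizer, L2-normalized input-embedding matrix)."""
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModel.from_pretrained(name)
    emb = model.get_input_embeddings().weight.detach()  # (vocab, dim)
    return tok, torch.nn.functional.normalize(emb, dim=-1)


# Probe 1: are a token's nearest neighbors in mT5 cross-lingual?
tok, emb = input_embeddings("google/mt5-small")
query = tok("water", add_special_tokens=False).input_ids[0]
sims = emb @ emb[query]                 # cosine similarity to every token
top = torch.topk(sims, 51).indices[1:]  # 50 neighbors, excluding the query
print(tok.convert_ids_to_tokens(top.tolist()))


# Probe 2: can a linear model recover the writing system from XLM-R
# embeddings? (The paper reports an average of 99.2% accuracy.)
def script_of(token):
    """Crude script label: Unicode-name prefix of the first letter."""
    for ch in token:
        if ch.isalpha():
            return unicodedata.name(ch, "UNKNOWN").split()[0]  # LATIN, CJK, ...
    return None


tok, emb = input_embeddings("xlm-roberta-base")
random.seed(0)
X, y = [], []
for token, idx in random.sample(list(tok.get_vocab().items()), 20_000):
    label = script_of(token)
    if label:
        X.append(emb[idx].numpy())
        y.append(label)
X_tr, X_te, y_tr, y_te = train_test_split(np.array(X), y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"held-out script-classification accuracy: {clf.score(X_te, y_te):.3f}")
```

The unicodedata script labels are noisier than a hand-curated writing-system list, so the resulting accuracy should be read as a sanity check on the linear-separability claim rather than a reproduction of the paper's number.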

Citations (13)
