Lossless Vocabulary Reduction for Auto-Regressive Language Models

Published 9 Oct 2025 in cs.CL, cs.AI, cs.LG, and stat.ML | (2510.08102v1)

Abstract: Tokenization -- the process of decomposing a given text into a sequence of subwords called tokens -- is one of the key components in the development of LLMs. In particular, auto-regressive LLMs generate text token by token, i.e., by predicting the next-token distribution given the previous tokens, so tokenization directly affects their efficiency in text generation. Since each LLM has its own vocabulary, i.e., its set of possible tokens, different models struggle to cooperate with each other at the level of next-token distributions, e.g., in model ensembling. In this paper, we establish a theoretical framework of lossless vocabulary reduction, which efficiently converts a given auto-regressive LLM into one with an arbitrarily small vocabulary without any loss in accuracy. As an application, we demonstrate that LLMs with different tokenizations can cooperate with each other efficiently through their maximal common vocabulary.
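To make the idea concrete, below is a minimal illustrative sketch (not the paper's exact construction) of projecting a next-token distribution over a model's full vocabulary onto a smaller shared vocabulary. All function names, the toy retokenization table, and the probabilities are invented for illustration; the paper's lossless procedure additionally tracks the remaining suffix of each decomposed token so that text-level probabilities are preserved exactly, which this sketch omits.

# Illustrative sketch only, not the authors' algorithm.
from collections import defaultdict

def reduce_next_token_distribution(p_large, retokenize):
    """Project a next-token distribution over a large vocabulary onto a smaller one.

    p_large    : dict mapping large-vocabulary token strings to probabilities.
    retokenize : function mapping a token string to its decomposition into
                 small-vocabulary tokens (a list of strings).

    Each large-vocabulary token contributes its probability mass to the first
    small-vocabulary token of its decomposition. (The full lossless construction
    also accounts for the rest of the decomposition; that bookkeeping is omitted.)
    """
    p_small = defaultdict(float)
    for token, prob in p_large.items():
        pieces = retokenize(token)
        p_small[pieces[0]] += prob
    return dict(p_small)

# Toy example: the large vocabulary has a merged token "therefore",
# while the small (common) vocabulary only has "the", "re", "fore", "cat".
def toy_retokenize(token):
    table = {"the": ["the"], "therefore": ["the", "re", "fore"], "cat": ["cat"]}
    return table[token]

p_large = {"the": 0.5, "therefore": 0.3, "cat": 0.2}
print(reduce_next_token_distribution(p_large, toy_retokenize))
# approximately {'the': 0.8, 'cat': 0.2}

In this toy setting, the mass of "therefore" is folded into "the" at the current step; a model reduced this way would then need to continue with "re" and "fore" on subsequent steps, which is the part the paper's lossless framework handles exactly.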
