
On the Effect of Token Merging on Pre-trained Models for Code

Published 19 Jul 2025 in cs.SE (arXiv:2507.14423v1)

Abstract: Tokenization is a fundamental component of LLMs for code. It involves breaking down the input into units that are later passed to the LLM stack to learn high-dimensional representations used in various contexts, from classification to generation. However, the output of these tokenizers is often longer than that traditionally used in compilers and interpreters. This could result in undesirable effects, such as increased computational overhead. In this work, we investigate the effect of merging the hidden representations of subtokens that belong to the same semantic unit, such as subtokens that form a single identifier. We propose two strategies: one based on averaging the representations and another that leverages a learning-based approach. Both methods can be seamlessly integrated with existing LLMs for code. We conduct experiments using six LLMs for code: CodeBERT, GraphCodeBERT, UniXCoder, CodeT5, CodeT5+ (220M), and CodeT5+ (770M), across three software engineering tasks: vulnerability detection, code classification, and code translation. Results show that these strategies can reduce the number of floating-point operations by $1\%$ to $19\%$. Regarding downstream performance, the most significant degradation was observed in the vulnerability detection task, where the F1 score decreased by $1.82$ points compared to the baseline. In contrast, for code translation, we observed an improvement of $2.47$ points in CodeBLEU. This work contributes to the broader effort of improving LLMs for code across multiple dimensions, including both computational efficiency and downstream performance.
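The abstract describes an averaging-based strategy for merging the hidden representations of subtokens that form one semantic unit (e.g., a single identifier). The paper's actual implementation is not shown here; the following is only a minimal sketch under the assumption that a per-position group id (derived, for instance, from the tokenizer's word alignment) marks which adjacent subtokens belong to the same identifier. The function name `merge_subtoken_states` and the grouping scheme are illustrative, not the authors' API.

```python
import torch

def merge_subtoken_states(hidden_states: torch.Tensor,
                          group_ids: list[int]) -> torch.Tensor:
    """Average hidden states of subtokens sharing a group id.

    hidden_states: (seq_len, hidden_dim) output of a code LLM layer.
    group_ids:     one id per position; subtokens of the same identifier
                   share an id (ids are assumed to increase with position,
                   e.g. ["get", "File", "Name"] -> [2, 2, 2]).
    Returns a (num_groups, hidden_dim) tensor with one vector per group,
    shortening the sequence the downstream layers have to process.
    """
    groups = torch.tensor(group_ids)
    merged = []
    for gid in groups.unique(sorted=True):
        mask = groups == gid
        # Average the representations of all subtokens in this group.
        merged.append(hidden_states[mask].mean(dim=0))
    return torch.stack(merged)


# Example: a 6-subtoken sequence where positions 2-4 form one identifier,
# so the merged sequence has 4 positions instead of 6.
states = torch.randn(6, 768)
merged = merge_subtoken_states(states, [0, 1, 2, 2, 2, 3])
print(merged.shape)  # torch.Size([4, 768])
```

The reported FLOP savings come from this sequence shortening: attention and feed-forward cost scale with sequence length, so replacing a run of subtokens with a single merged vector reduces the work of every subsequent layer. The learning-based variant mentioned in the abstract would replace the plain mean with a trained combination of the subtoken states.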
