
Position-Aware Depth Decay Decoding ($D^3$): Boosting Large Language Model Inference Efficiency

Published 11 Mar 2025 in cs.CL | (2503.08524v1)

Abstract: Due to their large number of parameters, the inference phase of LLMs is resource-intensive. Unlike traditional model compression, which requires retraining, recent dynamic computation methods show that not all components are needed for inference, enabling a training-free pipeline. In this paper, we focus on the dynamic depth of LLM generation. We propose a token-position-aware layer-skipping framework that saves 1.5x operations while maintaining performance. We first observe that tokens predicted later in the sequence have lower perplexity and thus require less computation. We then propose a training-free algorithm, Position-Aware Depth Decay Decoding ($D^3$), which uses a power-law decay function, $\left\lfloor L \times \alpha^{i} \right\rfloor$, to determine the number of layers to retain when generating token $T_i$. Remarkably, without any retraining, $D^3$ succeeds across a wide range of generation tasks for the first time. Experiments on LLMs (i.e., the Llama family) with $7 \sim 70$ billion parameters show that $D^3$ achieves an average 1.5x speedup over the full-inference pipeline while maintaining comparable performance, with nearly no drop ($<1\%$) on the GSM8K and BBH benchmarks.
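The layer-retention schedule from the abstract, $\lfloor L \times \alpha^{i} \rfloor$, can be sketched as follows. This is a minimal illustration of the decay rule only, not the authors' implementation; the layer count (32), the value of $\alpha$ (0.98), and the minimum-of-one-layer floor are illustrative assumptions not taken from the paper.

```python
import math

def retained_layers(total_layers: int, alpha: float, position: int) -> int:
    """Number of transformer layers to keep when generating the token at
    `position` (0-indexed), following the power-law decay rule
    floor(L * alpha^i) described in the abstract.

    `alpha` in (0, 1] controls how quickly depth decays with position.
    Clamping to at least 1 layer is an assumption of this sketch, not a
    detail stated in the abstract.
    """
    return max(1, math.floor(total_layers * alpha ** position))

# Illustrative schedule for a hypothetical 32-layer model with alpha = 0.98:
# depth shrinks monotonically as generation proceeds, so later (lower-
# perplexity) tokens receive less computation.
schedule = [retained_layers(32, 0.98, i) for i in range(0, 60, 10)]
print(schedule)  # e.g. [32, 26, 21, 17, 14, 11]
```

Averaged over a long generation, such a schedule executes noticeably fewer layer passes than full-depth decoding, which is the source of the reported ~1.5x operation savings.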
