Efficient Pretraining Length Scaling

Published 21 Apr 2025 in cs.CL (arXiv:2504.14992v2)

Abstract: Recent advances in LLMs have demonstrated the effectiveness of length scaling during post-training, yet its potential in pre-training remains underexplored. We present the Parallel Hidden Decoding Transformer (PHD-Transformer), a novel framework that enables efficient length scaling during pre-training while maintaining inference efficiency. PHD-Transformer achieves this through an innovative KV cache management strategy that distinguishes between original tokens and hidden decoding tokens. By retaining only the KV cache of original tokens for long-range dependencies while immediately discarding hidden decoding tokens after use, our approach maintains the same KV cache size as the vanilla transformer while enabling effective length scaling. To further enhance performance, we introduce two optimized variants: PHD-SWA employs sliding window attention to preserve local dependencies, while PHD-CSWA implements chunk-wise sliding window attention to eliminate linear growth in pre-filling time. Extensive experiments demonstrate consistent improvements across multiple benchmarks.

Summary

The paper addresses an under-explored opportunity in the pre-training phase of large language models (LLMs) by leveraging length scaling, a technique traditionally applied in the post-training stage. The authors propose a novel framework, the Parallel Hidden Decoding Transformer (PHD-Transformer), designed to perform length scaling efficiently during pre-training while preserving inference efficiency. The approach rests on a key-value (KV) cache management strategy that enables length scaling without the growth in cache size and compute that naive token repetition would incur.

Overview of Methods and Contributions

At the core of the PHD-Transformer's architecture is an innovative KV cache management strategy. The authors differentiate between original tokens and hidden decoding tokens during training. This distinction is crucial: only the original tokens' KV caches are retained for modeling long-range dependencies, while the KV caches of hidden decoding tokens are discarded immediately after use. As a result, the KV cache stays the same size as in the vanilla transformer, even though each token is processed multiple times to achieve effective length scaling.
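The cache policy described above can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: `phd_decode_step`, `make_kv`, and `repeat_k` are hypothetical names, and tuples stand in for real key-value tensors.

```python
def phd_decode_step(kv_cache, make_kv, repeat_k):
    """Process one input token repeat_k times (PHD-style cache policy sketch).

    Pass 0 is the original token: its KV entry is appended to the cache.
    Passes 1..repeat_k-1 are hidden decoding tokens: each attends over the
    retained cache plus its own KV, which is discarded immediately afterwards,
    so the cache never grows beyond one entry per original token.
    """
    kv_cache.append(make_kv(0))                   # retain the original token's KV
    context_sizes = [len(kv_cache)]               # entries this pass attends over
    for r in range(1, repeat_k):
        hidden_kv = make_kv(r)                    # hidden decoding token's KV
        context_sizes.append(len(kv_cache) + 1)   # cache + its own transient KV
        del hidden_kv                             # discarded right after use
    return kv_cache, context_sizes

cache = []
for t in range(4):                                # decode 4 tokens, repeat factor 3
    cache, sizes = phd_decode_step(cache, lambda r: ("kv", t, r), repeat_k=3)

print(len(cache))  # 4 entries: same cache size as a vanilla transformer
```

Each step triples the attention computation (one original pass plus two hidden passes) while the cache footprint tracks only the original token count, which is the trade the paper describes.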

To bolster performance, the paper introduces two enhanced variants of the PHD-Transformer:

  1. PHD-SWA (Sliding Window Attention): This variant uses sliding window attention to maintain local dependencies efficiently. It requires only constant additional KV cache memory, which significantly boosts performance without sacrificing efficiency.

  2. PHD-CSWA (Chunk-wise Sliding Window Attention): By imposing chunk-wise constraints, this variant circumvents the linear growth in pre-filling time observed in PHD-SWA, as dependencies between tokens are limited to within each chunk.
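One way to picture the difference between the two variants is through the attention mask. The sketch below builds a boolean mask under an assumed reading of the summary: causal attention, a sliding window of `window` positions, and, for the chunk-wise variant, no attention across chunk boundaries. The function name and parameters are illustrative, not from the paper.

```python
import numpy as np

def cswa_mask(seq_len, window, chunk=None):
    """Boolean attention mask: entry [i, j] is True iff position i may
    attend to position j. With chunk=None this is plain sliding-window
    attention (as in PHD-SWA); with a chunk size, attention is additionally
    confined to within each chunk (as in PHD-CSWA), so pre-filling work per
    chunk stays constant instead of growing with sequence length."""
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    mask = (j <= i) & (i - j < window)        # causal + sliding window
    if chunk is not None:
        mask &= (i // chunk) == (j // chunk)  # no cross-chunk attention
    return mask

swa = cswa_mask(8, window=3)
cswa = cswa_mask(8, window=3, chunk=4)
print(swa[4, 2], cswa[4, 2])  # True False: the chunk boundary blocks the second
```

The chunk constraint is what bounds pre-filling cost: each chunk can be processed independently of how much sequence precedes it.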

Experimental Findings

Through extensive experiments, the paper demonstrates consistent gains for the PHD-Transformer series across various benchmarks. Notably, the approach improves performance on tasks measuring reasoning and mathematical problem-solving. The empirical evaluation shows that both training loss and downstream performance scale consistently with the token repetition count. Importantly, both PHD-SWA and PHD-CSWA introduce only marginal increases in decoding latency, underscoring the efficiency of the approach.

Key results include training loss reductions and measurable accuracy improvements in benchmarks such as ARC, HellaSwag, PIQA, and Winogrande, with effectiveness shown for scaling factors of up to 256. The PHD-CSWA variant is particularly highlighted for balancing computational cost and performance benefits without imposing significant latency overhead.

Implications and Future Directions

The implications of this research are significant for the field. By achieving pre-training length scaling efficiently, the PHD-Transformer framework offers a viable pathway to further enhancing the capabilities of LLMs. Its KV cache policy and token management strategy could inform future LLM designs that aim to maximize effective computation per token while minimizing memory and latency costs.

Practically, this methodology could be adopted in industrial applications requiring high efficiency and performance, such as real-time language processing systems and embedded AI solutions. Theoretically, it opens avenues for exploring deeper integrations of transformer architecture modifications and advanced attention patterns.

Overall, the presented research delivers critical insights and a robust framework for extending the scope and applicability of length scaling in LLMs, warranting further exploration and refinement in subsequent studies.
