
ISO: Overlap of Computation and Communication within Sequence For LLM Inference

Published 4 Sep 2024 in cs.DC, cs.CL, cs.LG, and cs.PF | arXiv:2409.11155v1

Abstract: In the realm of LLM inference, the inherent structure of transformer models coupled with the multi-GPU tensor parallelism strategy leads to a sequential execution of computation and communication. This results in substantial underutilization of computing resources during the communication phase. To mitigate this inefficiency, various techniques have been developed to optimize the use of computational power throughout the communication process. These strategies primarily involve overlapping matrix computations and communications, as well as interleaving micro-batches across different requests. Nonetheless, these approaches either fall short of achieving ideal overlap or impose certain limitations on their application. To overcome these challenges, this paper introduces a novel strategy for computation-communication overlap that operates at the sequence level. This method not only enhances the degree of overlap but also minimizes the constraints on its applicability. Experimental evaluations conducted using 30B/70B models have demonstrated significant improvements in efficiency. Specifically, the proposed technique has been shown to reduce time consumption by approximately 35% on the 4090 GPU and by roughly 15% on the A800 GPU during the prefill stage of LLM inference.

