GreenLLM: Disaggregating Large Language Model Serving on Heterogeneous GPUs for Lower Carbon Emissions

Published 29 Dec 2024 in cs.AR and cs.DC (arXiv:2412.20322v1)

Abstract: LLMs have been widely adopted across many real-world applications, but their widespread use carries significant environmental costs due to their high computational intensity and resource demands. In particular, rising demand has driven the development of successive generations of high-performance GPUs, exacerbating electronic waste and accelerating the premature disposal of older devices. To address this problem, this paper focuses on reducing the carbon emissions of LLM serving by reusing older, lower-performance GPUs. We present GreenLLM, an SLO-aware LLM serving framework designed to minimize carbon emissions through GPU reuse. GreenLLM builds on two identified use cases that disaggregate specific computations onto older GPUs, reducing carbon emissions while meeting performance goals. To deepen our understanding of the potential carbon savings from disaggregation, we also provide a theoretical analysis of their relationship with carbon intensity and GPU lifetime. Our evaluations show that GreenLLM reduces carbon emissions by up to 40.6% compared to standard LLM serving on new GPUs only, while meeting latency SLOs for over 90% of requests across various applications, latency requirements, carbon intensities, and GPU lifetimes.
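The trade-off the abstract describes — weighing operational carbon (energy × grid carbon intensity) against embodied carbon amortized over GPU lifetime, subject to a latency SLO — can be sketched with a toy model. All names, numbers, and the accounting choices below (e.g. treating a reused GPU's marginal embodied carbon as zero, since it would otherwise be retired as e-waste) are illustrative assumptions, not GreenLLM's actual formulation.

```python
def total_carbon(gpu, carbon_intensity_g_per_kwh):
    """Carbon (gCO2e) attributed to serving one request on `gpu`."""
    # Operational carbon: energy drawn during the request times grid intensity.
    energy_kwh = gpu["power_w"] * gpu["latency_s"] / 3_600_000
    operational = energy_kwh * carbon_intensity_g_per_kwh
    # Embodied carbon, amortized over the GPU's expected lifetime.
    embodied = gpu["embodied_g"] * gpu["latency_s"] / gpu["lifetime_s"]
    return operational + embodied

def choose_gpu(gpus, slo_s, carbon_intensity_g_per_kwh=400.0):
    """Pick the lowest-carbon GPU that still meets the latency SLO."""
    feasible = [g for g in gpus if g["latency_s"] <= slo_s]
    if not feasible:  # no GPU meets the SLO: fall back to the fastest
        return min(gpus, key=lambda g: g["latency_s"])
    return min(feasible,
               key=lambda g: total_carbon(g, carbon_intensity_g_per_kwh))

gpus = [
    {"name": "old", "latency_s": 2.0, "power_w": 150,
     "embodied_g": 0.0, "lifetime_s": 3.15e7},        # reused: embodied cost sunk
    {"name": "new", "latency_s": 0.8, "power_w": 400,
     "embodied_g": 150_000.0, "lifetime_s": 1.58e8},  # fresh embodied carbon
]

print(choose_gpu(gpus, slo_s=3.0)["name"])  # loose SLO: older GPU is greener
print(choose_gpu(gpus, slo_s=1.0)["name"])  # tight SLO: only the new GPU qualifies
```

Under a loose SLO both GPUs are feasible and the reused GPU wins on total carbon; under a tight SLO the scheduler falls back to the newer GPU to keep latency within bounds, which mirrors the SLO-aware behavior the abstract claims.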

Authors (4)
