
RingAda: Pipelining Large Model Fine-Tuning on Edge Devices with Scheduled Layer Unfreezing

Published 27 Feb 2025 in cs.DC (arXiv:2502.19864v1)

Abstract: To enable large model (LM)-based edge intelligent services, on-device fine-tuning with locally personalized data allows for continuous, privacy-preserving LM customization. In this paper, we propose RingAda, a collaborative training framework for fine-tuning transformer-based LMs on edge devices. Specifically, RingAda performs parameter-efficient adapter fine-tuning across a set of interconnected edge devices arranged in a ring topology for per-batch training, sequentially placing frozen transformer blocks and their trainable adapter modules on the devices. RingAda follows a novel pipeline-parallel training mechanism with top-down adapter unfreezing, which allows backpropagation to stop early at the lowest unfrozen adapter layer, thereby accelerating fine-tuning. Extensive experimental results demonstrate that RingAda significantly reduces fine-tuning time and memory costs while maintaining competitive model performance compared to peer designs.
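The scheduling idea in the abstract can be illustrated with a minimal sketch. This is not the authors' code; the function names and the linear unfreezing schedule are assumptions made for illustration. It shows why stopping backpropagation at the lowest unfrozen adapter shortens the backward pass early in training.

```python
# Hypothetical sketch (not the RingAda implementation) of scheduled
# top-down adapter unfreezing. Adapters are unfrozen from the top
# transformer block downward; backpropagation only needs to reach the
# lowest unfrozen adapter, so early steps have a short backward pass.

def unfrozen_adapters(num_blocks, step, unfreeze_interval):
    """Indices of blocks whose adapters are trainable at `step`.

    Assumption: one additional adapter (from the top down) is unfrozen
    every `unfreeze_interval` steps; the paper's actual schedule may differ.
    """
    k = min(num_blocks, 1 + step // unfreeze_interval)
    return list(range(num_blocks - k, num_blocks))  # top-down order

def backprop_depth(num_blocks, step, unfreeze_interval):
    """Number of blocks the backward pass must traverse: gradients are
    only required down to the lowest unfrozen adapter layer."""
    lowest = unfrozen_adapters(num_blocks, step, unfreeze_interval)[0]
    return num_blocks - lowest
```

For example, with 12 blocks and an interval of 100 steps, only the top adapter (block 11) is trainable at step 0 and the backward pass covers 1 block; by step 400, adapters 7 through 11 are trainable and the backward pass covers 5 of the 12 blocks.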


Authors (3)
