Towards Federated Low-Rank Adaptation of Language Models with Rank Heterogeneity

Published 25 Jun 2024 in cs.DC and cs.LG | arXiv:2406.17477v3

Abstract: Low-rank adaptation (LoRA) offers an efficient alternative to full-weight adaptation in federated fine-tuning of LLMs, significantly reducing computational costs. By adjusting ranks for each client, federated LoRA enables flexible resource allocation. However, we observe that heterogeneous ranks among clients lead to unstable performance. Our analysis attributes this instability to the conventional zero-padding aggregation strategy, which dilutes information from high-rank clients during model aggregation. To address this issue, we propose a replication-based padding strategy that better retains valuable information from clients with high-quality data. Empirically, this approach accelerates convergence and enhances the global model's predictive performance.
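
To make the aggregation issue concrete, the following is a minimal sketch (not the authors' implementation; the exact replication and weighting scheme in the paper may differ) of how LoRA factors A (d x r) and B (r x k) from clients with different ranks could be padded to a common rank r_max before FedAvg-style averaging. The function names, the toy dimensions, and the rescaling choice are illustrative assumptions.

```python
import numpy as np

def zero_pad(A, B, r_max):
    """Conventional strategy: fill the missing rank slots with zeros."""
    d, r = A.shape
    _, k = B.shape
    A_pad = np.zeros((d, r_max))
    B_pad = np.zeros((r_max, k))
    A_pad[:, :r] = A
    B_pad[:r, :] = B
    return A_pad, B_pad

def replicate_pad(A, B, r_max):
    """Illustrative replication-based strategy: fill the missing rank slots by
    repeating existing rank components, rescaled so A_pad @ B_pad == A @ B."""
    d, r = A.shape
    _, k = B.shape
    reps = -(-r_max // r)                      # ceil(r_max / r)
    cols = np.tile(np.arange(r), reps)[:r_max] # which original component fills each slot
    counts = np.bincount(cols, minlength=r)    # how often each component is replicated
    A_pad = A[:, cols]
    B_pad = B[cols, :] / counts[cols][:, None] # rescale so the low-rank product is preserved
    return A_pad, B_pad

# Toy usage: two clients with ranks 2 and 4, aggregated at a common rank of 4.
rng = np.random.default_rng(0)
d, k, r_max = 8, 6, 4
clients = [(rng.normal(size=(d, r)), rng.normal(size=(r, k))) for r in (2, 4)]
padded = [replicate_pad(A, B, r_max) for A, B in clients]
A_agg = np.mean([p[0] for p in padded], axis=0)
B_agg = np.mean([p[1] for p in padded], axis=0)
```

Under zero-padding, a low-rank client contributes zeros in the trailing rank slots, so averaging shrinks the corresponding components of high-rank clients toward zero, which is the dilution the abstract identifies as the source of instability. Replication-based padding instead fills those slots with non-zero components (rescaled here so the padded product still equals A @ B), so the averaged slots stay populated.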
