GEM-Style Constraints for PEFT with Dual Gradient Projection in LoRA

Published 5 Jan 2026 in cs.LG and cs.AI | (2601.02500v1)

Abstract: Full fine-tuning of LLMs is computationally costly, motivating Continual Learning (CL) approaches that utilize parameter-efficient adapters. We revisit Gradient Episodic Memory (GEM) within the Low-Rank Adapter (LoRA) subspace and introduce I-GEM: a fixed-budget, GPU-resident dual projected-gradient approximation to GEM's quadratic projection. By constraining non-interference solely within the adapter parameters, I-GEM preserves GEM-like stability with orders-of-magnitude lower mean projection overhead. On a 3-task AG News split with induced domain drift, using GPT-2 (355M) and LoRA ($r=8$), I-GEM matches GEM's average accuracy (within $\sim\!0.04$ pts) and outperforms A-GEM by $\sim\!1.4$ pts. Crucially, it reduces projection time vs. GEM by a factor of $\sim\!103$. These results suggest that applying GEM constraints in the LoRA subspace is a practical pathway for continual learning at the LLM scale.
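To make the abstract's core mechanism concrete, the sketch below illustrates a GEM-style non-interference constraint enforced only over LoRA adapter gradients, with GEM's quadratic projection approximated by a fixed number of projected gradient steps on its dual (as the phrase "fixed-budget, GPU-resident dual projected-gradient approximation" suggests). This is a minimal illustration under stated assumptions, not the authors' implementation; the function names, step count, and learning rate are placeholders.

```python
# Hedged sketch: GEM's projection restricted to LoRA adapter gradients,
# approximated by fixed-budget projected gradient steps on the dual QP.
# `dual_projected_gem`, `num_steps`, and `lr` are illustrative assumptions.
import torch

def dual_projected_gem(g, G, num_steps=20, lr=0.1):
    """Approximately project the current gradient to respect past-task constraints.

    g : (d,) flattened gradient over LoRA parameters for the current task.
    G : (k, d) flattened reference gradients from k past tasks (episodic memory).
    Returns g_tilde such that <g_tilde, G[i]> >= 0 approximately holds for all i.
    """
    # GEM's primal QP: min_z 0.5 * ||g - z||^2  s.t.  G z >= 0.
    # Its dual:        min_{v >= 0} 0.5 * v^T (G G^T) v + (G g)^T v,
    # with the primal solution recovered as z* = g + G^T v*.
    GGt = G @ G.T          # (k, k) Gram matrix of memory gradients
    Gg = G @ g             # (k,) constraint terms for the current gradient
    v = torch.zeros(G.shape[0], device=g.device)
    for _ in range(num_steps):                     # fixed budget, stays on GPU
        grad_v = GGt @ v + Gg                      # gradient of the dual objective
        v = torch.clamp(v - lr * grad_v, min=0.0)  # projected step onto v >= 0
    return g + G.T @ v                             # corrected gradient

# Example use (assuming `model` is a LoRA-wrapped GPT-2 and `memory_grads`
# is a (k, d) tensor of stored past-task gradients over the same LoRA params):
# lora_params = [p for n, p in model.named_parameters()
#                if "lora_" in n and p.requires_grad]
# g = torch.cat([p.grad.flatten() for p in lora_params])
# g_tilde = dual_projected_gem(g, memory_grads)
# ...scatter g_tilde back into the LoRA .grad fields before optimizer.step().
```

Restricting the projection to the $r=8$ adapter subspace keeps both the Gram matrix and the flattened gradients small, which is presumably what lets the whole solve stay GPU-resident and accounts for the reported speedup over GEM's exact quadratic-programming projection.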
