TEAL: New Selection Strategy for Small Buffers in Experience Replay Class Incremental Learning

Published 30 Jun 2024 in cs.LG (arXiv:2407.00673v2)

Abstract: Continual learning is an unresolved challenge whose relevance increases when considering modern applications. Unlike the human brain, trained deep neural networks suffer from a phenomenon called catastrophic forgetting, wherein they progressively lose previously acquired knowledge upon learning new tasks. To mitigate this problem, numerous methods have been developed, many relying on the replay of past exemplars during new-task training. However, as the memory allocated for replay decreases, the effectiveness of these approaches diminishes. On the other hand, maintaining a large memory for the purpose of replay is inefficient and often impractical. Here we introduce TEAL, a novel approach to populating the memory with exemplars that can be integrated with various experience-replay methods and significantly enhance their performance with small memory buffers. We show that TEAL improves the average accuracy of existing class-incremental methods and outperforms other selection strategies, achieving state-of-the-art performance even with small memory buffers of 1-3 exemplars per class in the final task. This confirms our initial hypothesis that when memory is scarce, it is best to prioritize the most typical data. Code is available at https://github.com/shahariel/TEAL.
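The abstract's core idea, prioritizing the most typical exemplars when the replay buffer is tiny, can be illustrated with a common baseline proxy for typicality: selecting the samples closest to the class mean in feature space. This is a simplified sketch for intuition only, not TEAL's actual selection algorithm (which is defined in the paper and linked repository); the function name and toy data are assumptions.

```python
import numpy as np

def select_typical_exemplars(features, k):
    """Pick the k samples closest to the class mean in feature space.

    A simplified 'typicality' proxy (nearest-to-mean), illustrating the
    intuition behind prioritizing typical data for small replay buffers.
    This is NOT the TEAL algorithm itself.
    """
    mean = features.mean(axis=0)
    dists = np.linalg.norm(features - mean, axis=1)
    return np.argsort(dists)[:k]  # indices of the k most typical samples

# Toy example: one tight cluster of 9 samples plus a single far outlier.
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0.0, 0.1, size=(9, 4)),
                   np.full((1, 4), 5.0)])  # index 9 is the atypical sample
buffer_idx = select_typical_exemplars(feats, k=3)
print(buffer_idx)  # the outlier (index 9) is never among the selected
```

With only a handful of slots per class, selecting atypical or outlier samples wastes scarce buffer capacity, which is the motivation the abstract gives for preferring typical data in the small-buffer regime.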
