OrCo: Towards Better Generalization via Orthogonality and Contrast for Few-Shot Class-Incremental Learning

Published 27 Mar 2024 in cs.CV (arXiv:2403.18550v1)

Abstract: Few-Shot Class-Incremental Learning (FSCIL) introduces a paradigm in which the problem space expands with limited data. FSCIL methods inherently face the challenge of catastrophic forgetting as data arrives incrementally, making models susceptible to overwriting previously acquired knowledge. Moreover, given the scarcity of labeled samples available at any given time, models are prone to overfitting and struggle to balance extensive pretraining against the limited incremental data. To address these challenges, we propose the OrCo framework, built on two core principles: orthogonality of features in the representation space, and contrastive learning. In particular, we improve the generalization of the embedding space by employing a combination of supervised and self-supervised contrastive losses during the pretraining phase. Additionally, we introduce the OrCo loss to address the challenges arising from data limitations during incremental sessions. Through feature-space perturbations and orthogonality between classes, the OrCo loss maximizes margins and reserves space for subsequent incremental data. This, in turn, ensures that incoming classes are accommodated in the feature space without compromising previously acquired knowledge. Our experimental results demonstrate state-of-the-art performance across three benchmark datasets: mini-ImageNet, CIFAR100, and CUB. Code is available at https://github.com/noorahmedds/OrCo
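
As a rough illustration of the two ingredients the abstract names, the PyTorch sketch below pairs fixed, mutually orthogonal class anchors with a contrastive-style alignment loss computed on perturbed features. This is a minimal sketch under stated assumptions, not the authors' implementation: the names orthogonal_anchors and orco_style_loss, the Gaussian perturbation scale sigma, and the temperature tau are all illustrative; the actual OrCo loss is defined in the linked repository.

```python
import torch
import torch.nn.functional as F

def orthogonal_anchors(num_classes: int, dim: int) -> torch.Tensor:
    """Fixed, mutually orthogonal unit vectors used as class anchors.

    QR decomposition of a random Gaussian matrix yields orthonormal
    columns (requires dim >= num_classes).
    """
    q, _ = torch.linalg.qr(torch.randn(dim, num_classes))
    return q.t()  # (num_classes, dim); rows are orthonormal

def orco_style_loss(features: torch.Tensor,
                    labels: torch.Tensor,
                    anchors: torch.Tensor,
                    sigma: float = 0.1,   # illustrative perturbation scale
                    tau: float = 0.07     # illustrative temperature
                    ) -> torch.Tensor:
    """Illustrative loss: perturb normalized features, then contrast each
    sample against all class anchors so it aligns with its own anchor,
    maximizing margins toward the remaining (orthogonal) directions.
    """
    z = F.normalize(features, dim=1)
    z = F.normalize(z + sigma * torch.randn_like(z), dim=1)  # feature-space perturbation
    logits = z @ anchors.t() / tau          # similarity to every anchor
    return F.cross_entropy(logits, labels)  # pull each sample toward its anchor

# Usage with dummy data standing in for backbone embeddings:
anchors = orthogonal_anchors(num_classes=100, dim=512)
feats = torch.randn(32, 512)
labels = torch.randint(0, 100, (32,))
loss = orco_style_loss(feats, labels, anchors)
```

Because the anchors are orthogonal by construction, directions not yet assigned to a class remain reserved, which is one way to read the abstract's claim that the loss "reserves space" for classes arriving in later incremental sessions.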
