Enhancing Generative Class Incremental Learning Performance with Model Forgetting Approach
Abstract: This study presents a novel approach to Generative Class Incremental Learning (GCIL) by introducing a forgetting mechanism that dynamically manages class information for better adaptation to streaming data. GCIL, the continual learning of generative models, is an active research topic in computer vision with important practical applications. For humans, the ability to forget is a crucial brain function that facilitates continual learning by selectively discarding less relevant information. In machine learning, however, intentional forgetting has received little attention. In this study, we aim to bridge this gap by incorporating forgetting mechanisms into GCIL and examining their impact on the models' ability to learn continually. Our experiments show that integrating forgetting mechanisms significantly improves the models' acquisition of new knowledge, underscoring the positive role that strategic forgetting plays in continual learning.
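One way the abstract's idea of "strategic forgetting" in a generative-replay setting could look in practice is a per-class replay schedule in which the weight of previously seen classes decays as new tasks arrive, so less relevant old classes gradually fade from the replay mix. The following is a minimal illustrative sketch under that assumption; the class name, the multiplicative decay rule, and the proportional budget allocation are hypothetical choices for exposition, not the paper's exact method.

```python
class ForgettingReplayScheduler:
    """Illustrative sketch: per-class replay weights that decay over tasks,
    so older, less relevant classes are gradually 'forgotten' from the
    generative-replay mix. The decay rule here is a simplifying assumption."""

    def __init__(self, decay=0.5):
        self.decay = decay   # how strongly previously seen classes are forgotten
        self.weights = {}    # class id -> current replay weight

    def add_task(self, new_classes):
        # Forget: shrink the replay weight of every previously seen class.
        for c in self.weights:
            self.weights[c] *= self.decay
        # New classes enter the mix at full weight.
        for c in new_classes:
            self.weights[c] = 1.0

    def replay_mix(self, n_samples):
        # Allocate the replay budget proportionally to the surviving weights.
        total = sum(self.weights.values())
        return {c: round(n_samples * w / total) for c, w in self.weights.items()}

sched = ForgettingReplayScheduler(decay=0.5)
sched.add_task([0, 1])   # task 1 introduces classes 0 and 1
sched.add_task([2, 3])   # task 2: classes 0 and 1 decay to weight 0.5
mix = sched.replay_mix(60)   # {0: 10, 1: 10, 2: 20, 3: 20}
```

In a full GCIL pipeline, the resulting per-class counts would determine how many samples of each old class the generative model replays alongside the new task's real data.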