
CLASSP: a Biologically-Inspired Approach to Continual Learning through Adjustment Suppression and Sparsity Promotion

Published 29 Apr 2024 in cs.NE, cs.AI, and cs.LG | arXiv:2405.09637v2

Abstract: This paper introduces a new biologically inspired training method named Continual Learning through Adjustment Suppression and Sparsity Promotion (CLASSP). CLASSP is based on two principles observed in neuroscience, particularly in the context of synaptic transmission and Long-Term Potentiation (LTP). The first principle is a decay rate applied to weight adjustments, implemented as a generalization of the AdaGrad optimization algorithm: weights that have received many updates are given lower learning rates, since they likely encode important information about previously seen data. On its own, however, this principle produces a diffuse distribution of updates throughout the model, because it favors updates to weights that have not been updated before, whereas a sparse update distribution is preferable so that weights remain unassigned for future tasks. The second principle therefore introduces a threshold on the loss gradient: a weight is updated only if the loss gradient with respect to that weight exceeds the threshold, i.e., only weights with a significant impact on the current loss are updated. Both principles reflect phenomena observed in LTP, where a threshold effect and a gradual saturation of potentiation have been reported. CLASSP is implemented as a Python/PyTorch class, making it applicable to any model. Compared with Elastic Weight Consolidation (EWC) on computer vision and sentiment analysis datasets, CLASSP demonstrates superior accuracy and a smaller memory footprint.
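The abstract describes the two mechanisms concretely enough to sketch in code: an AdaGrad-style suppression of learning rates for frequently updated weights, and a gradient-magnitude threshold that keeps updates sparse. The sketch below is an illustrative PyTorch optimizer combining the two; the class name CLASSPSketch, the threshold hyperparameter, and the plain 1/sqrt AdaGrad accumulator are assumptions made for illustration, not the authors' released implementation (the paper describes its decay rule as a generalization of AdaGrad, so the exact form may differ).

import torch
from torch.optim import Optimizer

class CLASSPSketch(Optimizer):
    """Illustrative sketch of CLASSP's two principles (not the authors' released code).

    1) Adjustment suppression: each weight's step size decays with the squared
       gradients it has accumulated (AdaGrad-style), so heavily updated weights,
       assumed to encode earlier tasks, change less.
    2) Sparsity promotion: a weight is updated only if the magnitude of the loss
       gradient with respect to that weight exceeds `threshold`.
    """

    def __init__(self, params, lr=1e-2, threshold=1e-3, eps=1e-10):
        defaults = dict(lr=lr, threshold=threshold, eps=eps)
        super().__init__(params, defaults)

    @torch.no_grad()
    def step(self, closure=None):
        loss = None
        if closure is not None:
            with torch.enable_grad():
                loss = closure()

        for group in self.param_groups:
            lr, thr, eps = group["lr"], group["threshold"], group["eps"]
            for p in group["params"]:
                if p.grad is None:
                    continue
                grad = p.grad

                # Principle 2: update only where the gradient is significant.
                mask = grad.abs() > thr
                if not mask.any():
                    continue

                state = self.state[p]
                if len(state) == 0:
                    # Per-weight accumulator of squared gradients, as in AdaGrad.
                    state["sum_sq"] = torch.zeros_like(p)

                # Accumulate only the gradients that are actually applied.
                state["sum_sq"] += torch.where(mask, grad * grad, torch.zeros_like(grad))

                # Principle 1: per-weight learning rate shrinks as updates accumulate.
                step_size = lr / (state["sum_sq"].sqrt() + eps)
                p.add_(torch.where(mask, -step_size * grad, torch.zeros_like(grad)))

        return loss

Such an optimizer would be used like any torch.optim optimizer, e.g. opt = CLASSPSketch(model.parameters(), lr=1e-2, threshold=1e-3), which is consistent with the abstract's claim that a single Python/PyTorch class can be applied to any model.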
