
Neuron-centric Hebbian Learning

Published 16 Feb 2024 in cs.NE, cs.AI, and cs.LG | arXiv:2403.12076v2

Abstract: One of the most striking capabilities behind the learning mechanisms of the brain is the adaptation, through structural and functional plasticity, of its synapses. While synapses have the fundamental role of transmitting information across the brain, several studies show that it is the neuron activations that produce changes on synapses. Yet, most plasticity models devised for artificial Neural Networks (NNs), e.g., the ABCD rule, focus on synapses rather than neurons, therefore optimizing synapse-specific Hebbian parameters. This approach, however, increases the complexity of the optimization process, since each synapse is associated with multiple Hebbian parameters. To overcome this limitation, we propose a novel plasticity model, called Neuron-centric Hebbian Learning (NcHL), where optimization focuses on neuron- rather than synapse-specific Hebbian parameters. Compared to the ABCD rule, NcHL reduces the number of parameters from $5W$ to $5N$, where $W$ and $N$ are the numbers of weights and neurons, and usually $N \ll W$. We also devise a ``weightless'' NcHL model, which requires less memory by approximating the weights based on a record of neuron activations. Our experiments on two robotic locomotion tasks reveal that NcHL performs comparably to the ABCD rule despite using up to $\sim 97$ times fewer parameters, thus allowing for scalable plasticity.

Summary

  • The paper's main contribution is introducing a neuron-centric learning model that reduces parameter count from 5W to 5N, making optimization scalable.
  • It employs a weightless model that estimates synaptic weights via historical neuron activations, enhancing memory efficiency and robustness.
  • Experimental results in robotic locomotion tasks show performance comparable to traditional methods with up to 97x fewer parameters.


Introduction

The concept of Hebbian learning originates from neuroscience, where it describes the process by which neurons that frequently activate together strengthen their synaptic connections. This principle has been adapted to artificial neural networks, most notably through synapse-centric models such as the ABCD rule. However, these models focus primarily on synapse-specific adaptations, which entails optimizing a number of parameters proportional to the number of synapses in a network. This can lead to complex optimization challenges, particularly when scaling up to larger networks.
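
To make the cost of the synapse-centric baseline concrete, the ABCD rule is commonly written as $\Delta w_{ij} = \eta_{ij}\,(A_{ij}\,o_i o_j + B_{ij}\,o_i + C_{ij}\,o_j + D_{ij})$, with a separate coefficient set per synapse. The sketch below illustrates one such update step with NumPy; the function name and array layout are illustrative, not the paper's code.

```python
import numpy as np

def abcd_update(w, pre, post, A, B, C, D, eta):
    """One Hebbian step of the ABCD rule (synapse-centric sketch).

    w, A, B, C, D: arrays of shape (n_pre, n_post) -- one coefficient
    set per synapse, i.e. 5 learnable parameters per weight.
    pre:  activations of the presynaptic layer, shape (n_pre,).
    post: activations of the postsynaptic layer, shape (n_post,).
    """
    corr = np.outer(pre, post)          # o_i * o_j for every synapse
    dw = eta * (A * corr                # correlation term
                + B * pre[:, None]      # presynaptic term
                + C * post[None, :]     # postsynaptic term
                + D)                    # constant term
    return w + dw
```

Because every coefficient matrix has one entry per synapse, the optimizer must search a space that grows with $W$, which is exactly the scaling problem NcHL targets.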

To address this challenge, the paper introduces Neuron-centric Hebbian Learning (NcHL), a novel plasticity model that shifts the focus from synapses to neurons. The NcHL model reduces the parameter complexity by concentrating on neuron-specific adaptations rather than synaptic adjustments. This approach not only simplifies the optimization process but also closely aligns with biological plausibility, as neurons are central elements of adaptation in biological systems.

Methodology

Neuron-centric Hebbian Learning Model

NcHL proposes a neuron-centric approach, optimizing parameters associated with neurons rather than synapses. This reduces the parameters from $5W$ (where $W$ is the number of synapses) to $5N$ (where $N$ is the number of neurons), acknowledging that typically $N \ll W$. The update rule for NcHL considers both pre- and post-synaptic neuron activations, allowing for a comprehensive adaptation mechanism that maintains the local plasticity characteristic of Hebbian learning.
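
A minimal sketch of this idea follows. It assumes, as one plausible reading consistent with the $5N$ parameter count, that each synapse-level coefficient is derived by combining the five parameters of its pre- and post-synaptic neurons, here by simple averaging; the dictionary layout and function name are illustrative.

```python
import numpy as np

def nchl_update(w, pre, post, params_pre, params_post):
    """Neuron-centric Hebbian step (sketch).

    params_pre / params_post: dicts mapping 'A', 'B', 'C', 'D', 'eta'
    to per-neuron vectors -- 5 learnable parameters per *neuron*.
    Synapse-level coefficients are assumed here to be the mean of the
    corresponding pre- and post-neuron values.
    """
    def mix(key):
        # broadcast neuron-level parameters into a synaptic matrix
        return 0.5 * (params_pre[key][:, None] + params_post[key][None, :])

    A, B, C, D, eta = (mix(k) for k in ("A", "B", "C", "D", "eta"))
    corr = np.outer(pre, post)
    dw = eta * (A * corr + B * pre[:, None] + C * post[None, :] + D)
    return w + dw
```

The update stays local (it depends only on the two neurons a synapse connects), but the search space the optimizer explores now scales with $N$ instead of $W$.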

Weightless Model

An extension of NcHL is the "weightless" model, which approximates synaptic weights based on the record of neuron activations rather than maintaining explicit weight values. This model utilizes a memory-efficient approach by leveraging historical neuron activations to compute the weight updates dynamically, thus reducing the storage requirements and facilitating scalable plasticity in larger network architectures.
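
One way to realize this idea is to keep only a bounded window of recent activations and replay the Hebbian rule whenever a weight value is needed. The sketch below shows this pattern with a fixed-length buffer; the class name, window length, and use of scalar coefficients are all illustrative assumptions, not the paper's implementation.

```python
from collections import deque
import numpy as np

class WeightlessLayer:
    """Sketch of a 'weightless' NcHL layer.

    Instead of storing an explicit weight matrix, keep the last T
    (pre, post) activation pairs and reconstruct an approximate
    weight matrix on demand by replaying the Hebbian updates.
    """
    def __init__(self, T=50):
        # bounded memory: old activations are discarded automatically
        self.history = deque(maxlen=T)

    def record(self, pre, post):
        self.history.append((np.asarray(pre), np.asarray(post)))

    def approx_weights(self, A, B, C, D, eta):
        """Replay stored activations through the Hebbian rule.

        Coefficients are scalars here for simplicity; returns None
        if no activations have been recorded yet.
        """
        w = None
        for pre, post in self.history:
            dw = eta * (A * np.outer(pre, post)
                        + B * pre[:, None] + C * post[None, :] + D)
            w = dw if w is None else w + dw
        return w
```

The trade-off is storage of $O(T \cdot N)$ activations versus $O(W)$ explicit weights, which pays off when the activation window is short relative to the network's connectivity.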

Experimental Evaluation

The paper evaluates NcHL and its weightless variant in simulated robotic locomotion tasks, demonstrating that NcHL performs comparably to the traditional ABCD rule but utilizes significantly fewer parameters. In particular, experiments on two robotic tasks reveal that the neuron-centric approach can achieve performance akin to synaptic-centric models while using up to 97 times fewer parameters. This finding underscores the efficiency and scalability of NcHL, particularly in environments where computational resources are constrained.
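
The parameter savings follow directly from counting synapses versus neurons. The snippet below works through the arithmetic for a hypothetical fully connected controller; the layer sizes are illustrative and are not the networks used in the paper, so the resulting ratio differs from the reported $\sim 97\times$.

```python
def abcd_params(layer_sizes):
    # ABCD rule: 5 Hebbian coefficients per synapse
    return 5 * sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

def nchl_params(layer_sizes):
    # NcHL: 5 Hebbian coefficients per neuron
    return 5 * sum(layer_sizes)

sizes = [128, 64, 64, 8]  # illustrative fully connected controller
print(abcd_params(sizes), nchl_params(sizes))
```

For these sizes, $5W = 5 \cdot (128{\cdot}64 + 64{\cdot}64 + 64{\cdot}8) = 64{,}000$ while $5N = 5 \cdot 264 = 1{,}320$; the gap widens further as layers grow, since $W$ scales quadratically with layer width while $N$ scales linearly.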

Implications and Future Directions

Practical and Theoretical Implications

The shift to a neuron-centric model offers profound implications for the design and optimization of artificial neural networks. By reducing the parameter load, NcHL enhances the scalability and adaptability of networks, making it feasible to deploy large-scale networks in resource-constrained settings. Moreover, adopting a neuron-focused perspective aligns more closely with biological paradigms, potentially leading to more robust and bio-inspired computational models.

Future Developments

Future developments in neuron-centric plasticity models could explore integration with deep learning frameworks and hybrid architectures that combine neuron-centric and synaptic-centric characteristics. Additionally, extending the weightless approximation to dynamic environments and more complex tasks could further demonstrate the versatility and capability of neuron-centric approaches in evolving AI systems.

Conclusion

Neuron-centric Hebbian Learning introduces a paradigm shift in the optimization and scalability of artificial neural networks by focusing on neuron-specific adaptations. This approach not only simplifies the optimization landscape but also aligns with biological plausibility, paving the way for more efficient and adaptable network designs. The experimental results confirm its viability and potential for future developments in scalable AI systems.
