
Back to the Continuous Attractor

Published 31 Jul 2024 in q-bio.NC, cs.NE, and nlin.AO | arXiv:2408.00109v3

Abstract: Continuous attractors offer a unique class of solutions for storing continuous-valued variables in recurrent system states for indefinitely long time intervals. Unfortunately, continuous attractors suffer from severe structural instability in general: they are destroyed by most infinitesimal changes of the dynamical law that defines them. This fragility limits their utility especially in biological systems, as their recurrent dynamics are subject to constant perturbations. We observe that the bifurcations from continuous attractors in theoretical neuroscience models display various structurally stable forms. Although their asymptotic behaviors to maintain memory are categorically distinct, their finite-time behaviors are similar. We build on the persistent manifold theory to explain the commonalities between bifurcations from and approximations of continuous attractors. Fast-slow decomposition analysis uncovers the persistent manifold that survives the seemingly destructive bifurcation. Moreover, recurrent neural networks trained on analog memory tasks display approximate continuous attractors with predicted slow manifold structures. Therefore, continuous attractors are functionally robust and remain useful as a universal analogy for understanding analog memory.


Summary

  • The paper demonstrates that approximate continuous attractors enable robust analog memory despite inherent perturbations in neural dynamics.
  • It uses fast-slow decomposition and persistent manifold theory to prove, and to demonstrate in simulation, that attractor function is robust in recurrent neural networks.
  • Numerical experiments validate the framework by showing stable attractor dynamics in tasks like head-direction estimation and memory-guided saccades.

Detailed Summary of "Back to the Continuous Attractor" (2408.00109)

Introduction

The paper investigates continuous attractors in the context of neural models of analog memory. Continuous attractors are mathematical constructs that allow continuous-valued variables to be stored as recurrent system states that remain stable over long periods. Such attractors are integral to the theoretical understanding of the persistent neural activity that characterizes cognitive functions such as working memory and sensory evidence accumulation. However, continuous attractors are highly sensitive to disruptions of the underlying dynamical laws, which limits their applicability in biological systems subject to constant perturbations. The study aims to explain why, despite this theoretical fragility, continuous attractors remain a useful and robust analogy for modeling analog memory, by examining their bifurcations in theoretical models.
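The idea of storing a continuous value in a recurrent state can be made concrete with a textbook toy (an illustration of the concept, not a model from the paper): a two-neuron linear network whose recurrent weight matrix has a unit eigenvalue forms a line attractor, and the component along that eigenvector (here, the sum of the activities) is held indefinitely.

```python
import numpy as np

# Toy line attractor: W has eigenvalue 1 along (1, 1), so that
# direction of state space neither grows nor decays.
W = np.array([[0.5, 0.5],
              [0.5, 0.5]])

def simulate(x0, T=20.0, dt=0.01):
    """Euler-integrate the leaky recurrent dynamics dx/dt = (W - I) x."""
    x = np.array(x0, dtype=float)
    for _ in range(int(T / dt)):
        x = x + dt * (W @ x - x)
    return x

x = simulate([2.0, 0.0])
# The component along (1, -1) decays quickly, but x1 + x2 is conserved:
# the network "remembers" the continuous value 2.0 as the state (1, 1).
print(x, x.sum())
```

Any point on the line x1 = x2 is a fixed point, so the stored value varies continuously: that is exactly what makes the memory analog rather than discrete.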

Critique of Pure Continuous Attractors

Continuous attractors in neural models are prone to bifurcations due to infinitesimal changes in the parameters governing their dynamics. This sensitivity, referred to as the "fine-tuning problem," is a major hurdle in their application to biological analog working memory. There are two primary sources of perturbations: the stochastic nature of synaptic plasticity and fluctuations in synaptic weights. The authors critique the brittle nature of continuous attractors while emphasizing the persistent manifold theory, which provides a framework for understanding how these structures can be employed effectively even as they undergo bifurcations.
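The fine-tuning problem is easy to reproduce numerically. In the standard two-neuron line attractor toy (a hypothetical illustration, not the paper's model), shrinking the critical eigenvalue by one percent turns perfect storage into exponential forgetting:

```python
import numpy as np

W = np.array([[0.5, 0.5],
              [0.5, 0.5]])   # eigenvalue exactly 1 along (1, 1): line attractor

def simulate(x0, W, T=100.0, dt=0.01):
    x = np.array(x0, dtype=float)
    for _ in range(int(T / dt)):
        x = x + dt * (W @ x - x)   # dx/dt = (W - I) x
    return x

eps = 1e-2                       # a tiny weight perturbation
W_pert = W - eps * np.eye(2)     # critical eigenvalue becomes 1 - eps

exact = simulate([2.0, 0.0], W)        # memory held: x1 + x2 stays at 2
leaky = simulate([2.0, 0.0], W_pert)   # memory leaks away at rate eps
print(exact.sum(), leaky.sum())
```

After 100 time units the perturbed network has lost most of the stored value (the sum decays like exp(-eps * t)), even though the weights changed by only 1%. This is the structural instability the paper sets out to reconcile with the empirical usefulness of continuous attractors.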

Theoretical Framework

The authors build on persistent manifold theory (the theory of normally hyperbolic invariant manifolds), which explains why perturbed systems still exhibit approximate continuous attractors. The framework relies on a separation of time scales: a fast-slow decomposition splits the dynamics into fast variables that rapidly contract onto a low-dimensional manifold and slow variables that drift along it. It is demonstrated that the manifold surviving the bifurcation retains qualitatively similar characteristics, suggesting functional robustness. The paper supports this with mathematical proofs and numerical simulations, arguing that such manifolds survive perturbations and remain effective substrates for analog memory.
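The fast-slow picture can be sketched with a normal form (an assumed toy for illustration, not the paper's equations): radial attraction onto a ring is the fast subsystem, while a weak perturbation of strength eps induces a slow drift along the ring, which is the persistent slow manifold.

```python
import numpy as np

eps = 0.01   # perturbation strength sets the slow timescale (~ 1/eps)

def step(r, theta, dt=0.01):
    """One Euler step of a perturbed ring attractor in polar form."""
    r = r + dt * r * (1.0 - r)                # fast: collapse onto the ring r = 1
    theta = theta + dt * eps * np.sin(theta)  # slow: drift along the ring
    return r, theta

r, theta = 2.0, 1.0
for _ in range(1000):          # integrate for T = 10 time units
    r, theta = step(r, theta)

# After T = 10 the radius has fully converged, but the angle (the stored
# memory) has barely moved: on behavioral timescales the system still
# functions as an analog memory.
print(r, theta)
```

On the fast timescale the state is pinned to the ring; on the slow timescale the remembered angle drifts toward a few surviving fixed points. Asymptotically the memory is discrete, but for times short relative to 1/eps the finite-time behavior is indistinguishable from a true continuous attractor, which is the paper's central point.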

Numerical Experiments on Recurrent Neural Networks

The study trains recurrent neural networks (RNNs) on analog memory tasks, such as head-direction estimation and memory-guided saccades, to demonstrate the practical viability of the theory of approximate continuous attractors. The analysis indicates that the trained RNNs develop approximate continuous attractors that match the paper's theoretical predictions. Several topologies are observed for these approximate attractors, ranging from sets of discrete fixed points to limit cycles, each showing stability and attraction properties akin to those of continuous attractors despite inherent perturbations.
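Analyses of this kind typically locate a trained RNN's approximate attractors by minimizing the speed of the dynamics (a slow-point search in the style of Sussillo and Barak). The sketch below runs such a search on a hand-picked toy tanh network rather than a trained one; the weights and step sizes are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
W = np.array([[1.2, 0.0],
              [0.0, 1.2]])     # hand-picked toy weights, not trained

def F(x):
    """Continuous-time velocity of the network state: dx/dt."""
    return np.tanh(W @ x) - x

def speed(x):
    """q(x) = 0.5 ||F(x)||^2; slow/fixed points are its (near-)zeros."""
    f = F(x)
    return 0.5 * f @ f

def grad_speed(x):
    """Analytic gradient of q: J(x)^T F(x), with J the Jacobian of F."""
    f = F(x)
    J = np.diag(1.0 - np.tanh(W @ x) ** 2) @ W - np.eye(2)
    return J.T @ f

x = rng.standard_normal(2) * 0.2   # random initial condition near the origin
for _ in range(5000):              # plain gradient descent on the speed
    x = x - 0.1 * grad_speed(x)

print(x, speed(x))                 # a point where the dynamics nearly freeze
```

Repeating the search from many initial conditions maps out the set of slow points; for an approximate continuous attractor these organize into a low-dimensional structure (e.g. a ring) along which the speed is small but nonzero.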

Implications and Future Directions

The implications of this research are significant for the fields of theoretical neuroscience and artificial intelligence. The study suggests that biological systems and neural networks might not directly implement perfect continuous attractors. Instead, they operate in a regime where approximate attractors arise naturally due to the robustness of slow manifolds. This finding opens avenues for designing more resilient computational models that capture the essence of analog memory while allowing for real-world perturbations. Future work might explore how these principles can be applied to wider classes of neural computations or leveraged in practical applications in machine learning where robustness against small perturbations is crucial.

Conclusion

The research reaffirms the utility of continuous attractors as a conceptual tool in understanding analog memory, emphasizing their functional robustness rather than mathematical fragility. The paper's theoretical contributions, bolstered by experimental validation using RNNs, indicate that systems proximal to continuous attractors can achieve similar memory performance. This insight holds promise for advancing neural computation models used in both theoretical research and practical AI applications, ensuring they remain effective under a broad range of conditions.
