
Structure of activity in multiregion recurrent neural networks

Published 19 Feb 2024 in q-bio.NC, cond-mat.dis-nn, and cs.NE (arXiv:2402.12188v3)

Abstract: Neural circuits comprise multiple interconnected regions, each with complex dynamics. The interplay between local and global activity is thought to underlie computational flexibility, yet the structure of multiregion neural activity and its origins in synaptic connectivity remain poorly understood. We investigate recurrent neural networks with multiple regions, each containing neurons with random and structured connections. Inspired by experimental evidence of communication subspaces, we use low-rank connectivity between regions to enable selective activity routing. These networks exhibit high-dimensional fluctuations within regions and low-dimensional signal transmission between them. Using dynamical mean-field theory, with cross-region currents as order parameters, we show that regions act as both generators and transmitters of activity -- roles that are often in tension. Taming within-region activity can be crucial for effective signal routing. Unlike previous models that suppressed neural activity to control signal flow, our model achieves routing by exciting different high-dimensional activity patterns through connectivity structure and nonlinear dynamics. Our analysis offers insights into multiregion neural data and trained neural networks.


Summary

  • The paper demonstrates that multiregion networks combine random local dynamics with structured low-rank connectivity to enable selective signal transmission.
  • It reveals a trade-off where regions with high-dimensional, complex activity show diminished effectiveness in routing signals.
  • The study links asymmetric connectivity's spectral properties to emergent dynamic attractors, providing actionable insights for neural data analysis and disease modeling.

Unraveling the Dynamics of Multiregion Neural Networks: A Study in Structure and Communication

Introduction to Multiregion Recurrent Neural Networks

Neural circuits in the brain are architecturally complex systems composed of multiple regions, each performing distinct computational tasks while remaining interconnected to function cohesively. Advances in neural recording technologies have revealed the intricacies of activity patterns across these regions, sparking interest in the underlying dynamics and connectivity. This study examines the dynamics of multiregion recurrent neural networks, focusing on the interplay between within-region disorder and structured, low-rank connectivity across regions. By modeling these networks, the authors show how local and global dynamics interact to selectively route neural activity between regions.

Key Findings and Theoretical Insights

The model combines disordered random connectivity within regions with structured, low-rank connectivity between them. This architecture mirrors experimental evidence of communication subspaces, in which specific patterns of neural activity are selected for transmission between brain areas. Within this framework, the authors show that these networks exhibit dual dynamics: high-dimensional chaotic fluctuations driven by disordered local connections, and low-dimensional, stable signal transmission carried by structured interregional links.
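The architecture described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's code: parameter values (three regions, 200 neurons each, gain g, rank-1 couplings) and the Euler integration are illustrative choices.

```python
import numpy as np

# Sketch of the multiregion architecture: dense random connectivity inside
# each region, rank-1 ("communication subspace") coupling between regions.
rng = np.random.default_rng(0)
n_regions, N = 3, 200       # number of regions, neurons per region
g = 1.5                     # intra-region disorder strength (chaotic regime)
J = np.zeros((n_regions * N, n_regions * N))
for a in range(n_regions):
    for b in range(n_regions):
        blk = slice(a * N, (a + 1) * N), slice(b * N, (b + 1) * N)
        if a == b:
            # disordered local connectivity, variance g^2 / N
            J[blk] = g * rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))
        else:
            # rank-1 inter-region coupling u v^T / N: only activity along v
            # in region b is transmitted, and it arrives along u in region a
            u, v = rng.normal(size=N), rng.normal(size=N)
            J[blk] = np.outer(u, v) / N

# Rate dynamics dx/dt = -x + J tanh(x), integrated with forward Euler
dt, T = 0.1, 500
x = rng.normal(size=n_regions * N)
for _ in range(T):
    x = x + dt * (-x + J @ np.tanh(x))
print(x.shape)  # (600,)
```

With g above the chaos threshold, each region generates ongoing fluctuations, while the rank-1 couplings restrict what each region can communicate to a single direction in activity space.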

Crucially, the authors uncover a fundamental trade-off between a region's ability to generate complex, high-dimensional activity and its capacity to transmit signals. Regions dominated by strong local chaos route information poorly, so chaos must be tamed for effective communication. Rather than suppressing activity outright, the model achieves routing by selectively exciting distinct activity patterns shaped by the network's connectivity and nonlinear dynamics. This shift from a neuron-level to a geometric view suggests a novel mechanism for neural circuit modulation.
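The high- versus low-dimensional split at the heart of this trade-off can be quantified with the participation ratio of the activity covariance spectrum, a standard dimensionality measure (not specific to this paper). The sketch below uses surrogate data, assuming white noise for chaotic within-region activity and a rank-1 time series for a transmitted current.

```python
import numpy as np

# Participation ratio PR = (sum lam_i)^2 / sum(lam_i^2) of the covariance
# eigenvalues: near N for unstructured high-dimensional activity, near 1
# for a one-dimensional transmitted signal.
rng = np.random.default_rng(1)

def participation_ratio(X):
    """X: (time, neurons). Effective dimensionality of the activity."""
    lam = np.clip(np.linalg.eigvalsh(np.cov(X.T)), 0.0, None)
    return lam.sum() ** 2 / (lam ** 2).sum()

T, N = 2000, 50
high_dim = rng.normal(size=(T, N))       # surrogate chaotic activity
signal = rng.normal(size=T)
v = rng.normal(size=N)
low_dim = np.outer(signal, v)            # rank-1 "transmitted current"
print(participation_ratio(high_dim))     # close to N
print(participation_ratio(low_dim))      # close to 1
```

In the paper's setting, the analogous comparison would be between a region's full activity and its projection onto the communication subspace.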

Relaxing the symmetry constraints on connectivity, the authors also analyze asymmetric interregional interactions. This reveals the emergence of dynamic attractors, including limit cycles, pointing to even richer behavioral repertoires in neural circuits. These dynamics are tied to the leading eigenvalues of the connectivity matrix, linking its spectral properties to the observed network behavior.
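The spectral link can be illustrated with a toy construction: an antisymmetric low-rank coupling places a complex-conjugate pair of leading eigenvalues on the imaginary axis, the linear signature of rotational, limit-cycle-like dynamics. The specific rank-2 structure below is an illustrative choice, not the paper's model.

```python
import numpy as np

# Antisymmetric rank-2 coupling: maps u1 -> u2 and u2 -> -u1, i.e. a
# rotation in the plane spanned by u1 and u2. Its nonzero eigenvalues
# form a purely imaginary conjugate pair.
rng = np.random.default_rng(2)
N = 400
u1, u2 = rng.normal(size=N), rng.normal(size=N)
J = 2.0 * (np.outer(u2, u1) - np.outer(u1, u2)) / N
eig = np.linalg.eigvals(J)
leading = eig[np.argsort(-np.abs(eig))[:2]]
print(leading)  # complex-conjugate pair with near-zero real part
```

In a full nonlinear network, such a complex leading pair is what allows an oscillatory attractor to emerge instead of a fixed point.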

Implications for Neuroscience and Neural Data Analysis

This theoretical framework has significant implications for understanding brain function and for neural data analysis:

  1. Cognitive Flexibility: The capacity for switching between network-wide stable states and dynamic patterns might underpin the brain's flexibility in task switching and cognitive processing. Adjusting the balance of connectivity can facilitate transitions between discrete memory states and continuous information processing modes.
  2. Neural Data Interpretation: Our model aligns with current methods for analyzing multiregion neural recordings. The identification of currents as key dynamical variables suggests a promising avenue for dissecting neural recordings into intelligible components, facilitating a deeper understanding of intra- and inter-regional communication.
  3. Disease Modeling: Alterations in the structured connectivity of neural circuits could provide insights into the neural basis of certain neurological disorders characterized by disrupted communication, such as schizophrenia or autism spectrum disorder.
  4. Connectome Analysis: For datasets with predefined or to-be-determined regions, our model offers a framework for approximating neuronal dynamics based on the observed connectivity, potentially bridging the gap between structural connectome data and functional neural activity.
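The "currents as key dynamical variables" idea in point 2 has a simple operational reading: given a known rank-1 coupling u v^T / N from a source region to a target region, the current the target receives is a scalar projection of the source's firing rates. The sketch below uses surrogate activity; the variable names and the tanh rate function are assumptions for illustration.

```python
import numpy as np

# Cross-region current kappa(t) = v . tanh(x_B(t)) / N: a single time
# series summarizing everything region B transmits through this channel.
rng = np.random.default_rng(3)
N, T = 300, 200
v = rng.normal(size=N)            # selection vector of the rank-1 coupling
x_B = rng.normal(size=(T, N))     # surrogate activity in source region B
kappa = np.tanh(x_B) @ v / N      # transmitted current, shape (T,)
print(kappa.shape)  # (200,)
```

Applied to recordings, such projections would reduce high-dimensional multiregion data to a small set of interpretable interregional signals.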

Concluding Thoughts

This study presents a comprehensive analysis of multiregion recurrent neural networks through dynamical mean-field theory, with cross-region currents as order parameters. By highlighting the crucial role of structured connectivity in shaping neural dynamics, it underscores the balance neural circuits must strike to meet diverse computational demands. These insights both deepen our understanding of neural circuitry and suggest new approaches for analyzing and interpreting complex neural data.
