Learning Hierarchically Structured Concepts
Abstract: We study the question of how concepts that have hierarchical structure are represented in the brain. Specifically, we introduce a model of hierarchically structured concepts and show how a biologically plausible neural network can recognize these concepts, and how it can learn them in the first place. Our main goal is to introduce a general framework for these tasks and to prove formally that both recognition and learning can be achieved. We show that both tasks can be accomplished even in the presence of noise. For learning, we formally analyze Oja's rule, a well-known biologically plausible rule for adjusting synaptic weights. We complement the learning results with lower bounds asserting that, in order to recognize concepts of a given hierarchical depth, neural networks must have a correspondingly large number of layers.
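For context, Oja's rule (Oja, 1982), the plasticity rule the paper analyzes, updates a neuron's weight vector as dw = eta * y * (x - y * w) with postsynaptic activation y = w . x; it is Hebbian learning plus a decay term that keeps the weights bounded and drives them toward the top principal component of the input. The sketch below illustrates this behavior on synthetic data; the dataset, learning rate, and convergence check are illustrative choices, not taken from the paper.

```python
# Minimal sketch of Oja's rule: dw = eta * y * (x - y * w), y = w . x.
# The data distribution and hyperparameters here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Inputs whose principal component lies along the unit vector (1, 1)/sqrt(2).
pc = np.array([1.0, 1.0]) / np.sqrt(2)
X = rng.normal(size=(5000, 1)) * pc + 0.1 * rng.normal(size=(5000, 2))

w = rng.normal(size=2)          # random initial synaptic weights
eta = 0.01                      # learning rate
for x in X:
    y = w @ x                   # postsynaptic activation
    w += eta * y * (x - y * w)  # Hebbian term y*x minus the decay y^2*w

# Oja's rule normalizes w (||w|| -> 1) and aligns it with the
# top principal component of the inputs (|w . pc| -> 1).
print(np.linalg.norm(w), abs(w @ pc))
```

The decay term `y * y * w` is what distinguishes Oja's rule from plain Hebbian learning: without it the weights would grow without bound, while with it the fixed points are the unit-norm principal eigenvectors of the input covariance.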