Neural Echos: Depthwise Convolutional Filters Replicate Biological Receptive Fields
Abstract: In this study, we present evidence suggesting that depthwise convolutional kernels effectively replicate the structural intricacies of the biological receptive fields observed in the mammalian retina. We analyze trained kernels from several state-of-the-art models to substantiate this evidence. Inspired by this discovery, we propose an initialization scheme that draws on the structure of biological receptive fields. Experiments on the ImageNet dataset with multiple CNN architectures featuring depthwise convolutions reveal a marked improvement in the accuracy of the learned model when it is initialized with biologically derived weights. This underscores the potential of biologically inspired computational models both to further our understanding of vision processing systems and to improve the efficacy of convolutional networks.
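The abstract does not spell out the initialization scheme, but classical models of retinal receptive fields (e.g. Rodieck's center-surround model) use a Difference-of-Gaussians profile. The sketch below, a minimal illustration rather than the paper's exact method, initializes a bank of depthwise-convolution kernels with ON- and OFF-center Difference-of-Gaussians filters; the kernel size, sigma ranges, and per-channel jitter are illustrative assumptions.

```python
import numpy as np

def dog_kernel(size=7, sigma_c=1.0, sigma_s=2.0, on_center=True):
    """Difference-of-Gaussians kernel approximating a retinal
    center-surround receptive field (illustrative parameters)."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    center = np.exp(-r2 / (2 * sigma_c ** 2)) / (2 * np.pi * sigma_c ** 2)
    surround = np.exp(-r2 / (2 * sigma_s ** 2)) / (2 * np.pi * sigma_s ** 2)
    k = center - surround
    if not on_center:
        k = -k  # OFF-center: inhibitory center, excitatory surround
    return k / np.abs(k).sum()  # normalize so activations stay bounded

def init_depthwise_weights(channels, size=7, seed=None):
    """One DoG filter per channel, alternating ON/OFF polarity and
    jittering the center width so channels are not identical."""
    rng = np.random.default_rng(seed)
    kernels = []
    for c in range(channels):
        sigma_c = 0.8 + 0.4 * rng.random()       # assumed jitter range
        kernels.append(dog_kernel(size, sigma_c, 2.5 * sigma_c,
                                  on_center=(c % 2 == 0)))
    return np.stack(kernels)  # shape: (channels, size, size)
```

The resulting array can be copied into the weight tensor of a depthwise convolution layer (e.g. a grouped `Conv2d` with `groups == channels`) before training begins.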