CIMulator: A Comprehensive Simulation Platform for Computing-In-Memory Circuit Macros with Low Bit-Width and Real Memory Materials
Abstract: This paper presents CIMulator, a simulation platform for quantifying the efficacy of various synaptic devices in neuromorphic accelerators across different neural network architectures. Nonvolatile memory devices, such as resistive random-access memory (RRAM) and the ferroelectric field-effect transistor, as well as volatile static random-access memory (SRAM) devices, can be selected as synaptic devices. A multilayer perceptron and convolutional neural networks (CNNs), including LeNet-5, VGG-16, and a custom CNN named C4W-1, are simulated to evaluate the effects of these synaptic devices on training and inference outcomes. The datasets used in the simulations are MNIST, CIFAR-10, and a white blood cell dataset. By applying batch normalization and appropriate optimizers in the training phase, neuromorphic systems with very low-bit-width or binary weights can achieve pattern recognition rates that approach software-based CNN accuracy. We also introduce spiking neural networks with RRAM-based synaptic devices for the recognition of MNIST handwritten digits.
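As a minimal illustration of the binary-weight scheme mentioned in the abstract (a generic sketch of weight binarization, not the CIMulator implementation itself), real-valued "shadow" weights are kept for the optimizer update, while a sign-quantized copy is used in the forward pass:

```python
import numpy as np

def binarize(w):
    """Map real-valued weights to {-1, +1} via the sign function.
    Zeros are mapped to +1 by convention."""
    return np.where(w >= 0, 1.0, -1.0)

# Real-valued shadow weights are retained so small gradient updates
# can accumulate; only the binarized copy enters the forward pass.
w_shadow = np.array([[0.3, -1.2],
                     [0.0,  0.7]])
w_binary = binarize(w_shadow)
print(w_binary)  # [[ 1. -1.] [ 1.  1.]]
```

In training frameworks for binarized networks, the non-differentiable sign function is typically handled with a straight-through estimator, passing gradients directly to the shadow weights.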