
Energy-efficient SNN Architecture using 3nm FinFET Multiport SRAM-based CIM with Online Learning
Description
There is an increasing demand for ultra-low-power operation in edge AI devices, such as smartphones, wearables, and Internet-of-Things sensor systems, with constrained battery budgets. Current AI computation units face challenges, primarily from the memory-wall issue, which limits overall system-level performance. In this paper, we propose a new SRAM-based Compute-In-Memory (CIM) accelerator optimized for Spiking Neural Network (SNN) inference. Our proposed architecture employs a multiport SRAM design with multiple decoupled read ports to enhance throughput and transposable read-write ports to facilitate online learning. Furthermore, we develop an arbiter circuit for efficient data processing and port allocation during computation. Results for a 128x128 array in 3nm FinFET technology demonstrate a 3.1x improvement in speed and a 2.2x enhancement in energy efficiency with our 5R1W SRAM design compared to a traditional single-port SRAM design. At the system level, a throughput of 44 MInf/s is achieved at 607 pJ/Inf and 29 mW.
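The arbiter's role of allocating shared read ports among competing requesters can be illustrated with a minimal round-robin scheme. This is a hypothetical behavioral sketch for illustration only; the port count (5, matching the 5R1W design) and the round-robin policy are assumptions, not details of the paper's circuit.

```python
class RoundRobinArbiter:
    """Behavioral model of a round-robin arbiter over N decoupled read ports.

    Hypothetical sketch: the actual arbiter circuit in the paper may use a
    different allocation policy.
    """

    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        # Start so that port 0 has highest priority on the first cycle.
        self.last_grant = num_ports - 1

    def grant(self, requests: list) -> int:
        """Return the index of the granted port, or None if nothing requests."""
        for offset in range(1, self.num_ports + 1):
            port = (self.last_grant + offset) % self.num_ports
            if requests[port]:
                self.last_grant = port  # rotate priority past the winner
                return port
        return None

# Example: ports 1 and 3 request access in consecutive cycles.
arb = RoundRobinArbiter(5)
print(arb.grant([False, True, False, True, False]))  # grants port 1
print(arb.grant([False, True, False, True, False]))  # then port 3
```

Rotating priority after each grant keeps any single port from starving the others, which matters when several decoupled read ports contend for the same array in one computation cycle.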
Event Type
Research Manuscript
Time
Tuesday, June 25, 4:30pm - 4:45pm PDT
Location
3004, 3rd Floor
Topics
AI
Design
Keywords
AI/ML, Digital, and Analog Circuits