BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Linklings LLC//NONSGML//EN
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
X-LIC-LOCATION:America/Los_Angeles
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:19700308T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:19701101T020000
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240626T180035Z
LOCATION:3003\, 3rd Floor
DTSTART;TZID=America/Los_Angeles:20240626T114500
DTEND;TZID=America/Los_Angeles:20240626T120000
UID:dac_DAC 2024_sess118_RESEARCH991@linklings.com
SUMMARY:Efficient Memory Integration: MRAM-SRAM Hybrid Accelerator for Spa
 rse On-Device Learning
DESCRIPTION:Research Manuscript\n\nFan Zhang (Johns Hopkins University), A
 mitesh Sridharan (Arizona State University), Wilman Tsai (Stanford Univer
 sity), Yiran Chen (Duke University), Shan Wang (Stanford University), and
  Deliang Fan (Johns Hopkins University)\n\nWith the prosperous developmen
 t of Deep Neural Networks (DNNs), numerous Processing-In-Memory (PIM) des
 igns have emerged to accelerate DNN models with exceptional throughput an
 d energy efficiency. PIM accelerators based on Non-Volatile Memory (NVM)
  or volatile memory offer distinct advantages in computational efficienc
 y and performance. NVM-based PIM accelerators, despite demonstrated succ
 ess in DNN inference, face limitations in on-device learning due to high
  write energy, latency, and instability. Conversely, fast volatile memori
 es, such as SRAM, offer rapid read/write operations for DNN training but
  suffer from significant leakage currents and large memory footprints. I
 n this paper, for the first time, we present a fully-digital sparse proc
 essing hybrid NVM-SRAM design that synergistically combines the strength
 s of NVM and SRAM, tailored for on-device continual learning. Our NVM- a
 nd SRAM-based PIM circuit macros support both storage and processing of
  N:M structured sparsity patterns, significantly improving storage and c
 omputing efficiency. Extensive experiments demonstrate that our hybrid s
 ystem effectively reduces area and power consumption while maintaining h
 igh accuracy, offering a scalable and versatile solution for on-device c
 ontinual learning.\n\nTopic: Design\n\nKeyword: In-memory and Near-memor
 y Computing Architectures, Applications and Systems\n\nSession Chairs: S
 eokhyeong Kang (Pohang University of Science and Technology (POSTECH)) a
 nd Giacomo Pedretti (Hewlett Packard Enterprise)
END:VEVENT
END:VCALENDAR
