BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
X-LIC-LOCATION:America/Los_Angeles
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:19700308T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:19701101T020000
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240626T180002Z
LOCATION:3004\, 3rd Floor
DTSTART;TZID=America/Los_Angeles:20240626T153000
DTEND;TZID=America/Los_Angeles:20240626T173000
UID:dac_DAC 2024_sess119@linklings.com
SUMMARY:Memories Are Smarter than Ever Before
DESCRIPTION:Research Manuscript\n\nProcessing-in-memory (PIM) has been ext
 ensively studied in the last few years, but memories never stop evolving, 
 providing more processing power and functions. The first paper proposes a 
 ReRAM-based PIM architecture for accelerating sparse conjugate gradient so
 lvers, followed by the second paper introducing an efficient MAC scheme fo
 r MRAM-based PIM. In the next two papers, emerging devices are employed as
  a key computing component in the system, demonstrating their effectiveness.
  Finally, the last three papers discuss the various design approaches to i
 mprove SRAM-based PIM hardware, ranging from circuits to modeling.\n\nReCG
 : ReRAM-Accelerated Sparse Conjugate Gradient\n\nSolving sparse linear sys
 tems is crucial in scientific computing. Sparse Conjugate Gradient (CG) is
  one of the most popular iterative solvers with high efficiency and low st
 orage requirements. However, the performance of sparse CG solvers implemen
 ted on storage-compute separated architectures is gre...\n\n\nMingjia Fan 
 (Super Scientific Software Laboratory, China University of Petroleum-Beiji
 ng); Xiaoming Chen (Institute of Computing Technology, Chinese Academy of 
 Sciences); and Dechuang Yang, Zhou Jin, and Weifeng Liu (Super Scientific 
 Software Laboratory, China University of Petroleum-Beijing)\n-------------
 --------\nTowards Efficient SRAM-PIM Architecture Design by Exploiting Uns
 tructured Bit-Level Sparsity\n\nBit-level sparsity in neural network model
 s harbors immense untapped potential. Eliminating redundant calculations o
 f randomly distributed zero-bits significantly boosts computational effici
 ency. Yet, traditional digital SRAM-PIM architecture, limited by rigid
  crossbar architecture, struggles to e...\n\n\nCenlin Duan, Jianlei Yang, 
 Yiou Wang, Yikun Wang, Yingjie Qi, and Xiaolin He (Beihang University); Bo
 nan Yan (Peking University); and Xueyan Wang, Xiaotao Jia, and Weisheng Zh
 ao (Beihang University)\n---------------------\nOPTIMA: Design-Space Explo
 ration of Discharge-Based In-SRAM Computing: Quantifying Energy-Accuracy T
 rade-offs\n\nIn-SRAM computing promises energy efficiency, but circuit non
 linearities and PVT variations pose major challenges in designing robust a
 ccelerators. To address this, we introduce OPTIMA, a modeling framework th
 at aids in analyzing bit-line discharge and power consumption in 6T-SRAM-b
 ased accelerators...\n\n\nSaeed Seyedfaraji, Severin Jäger, Salar Shakibha
 medan, Asad Aftab, and Semeen Rehman (Technische Universität Wien)\n------
 ---------------\nFRM-CIM: Full-Digital Recursive MAC Computing in Memory S
 ystem Based on MRAM for Neural Network Applications\n\nComputing in memory
  (CIM) realizes energy-efficient neural network algorithms by implementing
  highly parallel multiply-and-accumulate (MAC) operation. However, the MAC
  delay of CIM increases sharply as computing precision improves, which rest
 ricts its development. In this work, we pr...\n\n\nJinkai
  Wang, Zekun Wang, Bojun Zhang, Zhengkun Gu, Youxiang Chen, Weisheng Zhao,
  and Yue Zhang (Beihang University)\n---------------------\nAn In-Memory C
 omputing Accelerator with Reconfigurable Dataflow for Multi-Scale Vision T
 ransformer with Hybrid Topology\n\nTransformer models equipped with the mult
 i-head attention (MHA) mechanism have demonstrated promise in computer visio
 n tasks, i.e., vision transformers (ViTs). Nevertheless, the lack of induc
 tive bias in ViTs leads to substantial computational and storage requireme
 nts, hindering their deployment on reso...\n\n\nZhiyuan Chen, Yufei Ma, Ke
 yi Li, Yifan Jia, Guoxiang Li, Meng Wu, Tianyu Jia, Le Ye, and Ru Huang (P
 eking University)\n---------------------\nToward High-Accuracy, Programmab
 le Extreme-Edge Intelligence for Neuromorphic Vision Sensors utilizing Mag
 netic Domain Wall Motion-based MTJ\n\nThe desire to empower resource-limit
 ed edge devices with computer vision (CV) must overcome the high energy co
 nsumption of collecting and processing vast sensory data. To address the c
 hallenge, this work proposes an energy-efficient non-von-Neumann in-pixel 
 processing solution for neuromorphic visio...\n\n\nMd Abdullah-Al Kaiser, 
 Gourav Datta, and Peter Beerel (University of Southern California) and Akh
 ilesh Jaiswal (University of Wisconsin, Madison)\n---------------------\nC
 ross-Layer Exploration and Chip Demonstration of In-Sensor Computing for L
 arge-Area Applications with Differential-Frame ROM-Based Compute-In-Memory
 \n\nIn-sensor computing has emerged as a promising approach to mitigating 
 huge data transmission costs between sensors and processing units. Recentl
 y, emerging application scenarios have placed greater demands on sensory te
 chnology for large-area and flexible integration. However, with thin-fil
 m techno...\n\n\nJialong Liu, Wenjun Tang, Deyun Chen, Chen Jiang, Huazhon
 g Yang, and Xueqing Li (Tsinghua University)\n\nTopic: Design\n\nKeyword: 
 In-memory and Near-memory Computing Architectures, Applications and System
 s\n\nSession Chairs: Peipei Zhou (University of Pittsburgh) and Xueqing Li
  (Tsinghua University)
END:VEVENT
END:VCALENDAR
