Presentation

Shared-PIM: Enabling Concurrent Computation and Data Flow for Faster Processing-in-Memory
Description
Processing-in-Memory (PIM) enhances memory with computational capabilities, potentially solving the energy and latency issues tied to data transfer between memory and processors. However, managing concurrent computation and data movement in PIM is challenging. This paper introduces Shared-PIM, an architecture for in-DRAM PIM that strategically allocates rows in memory banks, bolstered by memory peripherals, for concurrent processing and data flow. Shared-PIM enables simultaneous computation and data transfer within a memory bank. Compared to LISA, a state-of-the-art architecture that facilitates data transfers for in-DRAM PIM, Shared-PIM reduces copy latency and energy by 5x and 1.2x, respectively. Furthermore, when integrated into a state-of-the-art (SOTA) in-DRAM PIM architecture (pLUTo), Shared-PIM achieves 1.4x faster addition and multiplication, and thereby improves the performance of CNN, FFT, and BFS tasks by 1.3x, 1.27x, and 1.7x, respectively, with an area overhead of just 7.16%.
Event Type
Work-in-Progress Poster
Time
Wednesday, June 26, 5:00pm - 6:00pm PDT
Location
Level 2 Lobby
Topics
AI
Autonomous Systems
Cloud
Design
EDA
Embedded Systems
IP
Security