Presentation

A Near-data Processing Architecture for GNN Training and Inference Acceleration
Description
Graph Neural Networks (GNNs) demand extensive fine-grained memory access, which leads to inefficient use of bandwidth resources. This issue becomes more serious in large-scale graph training tasks. Near-data processing is a promising solution for data-intensive computation; however, existing GNN acceleration architectures do not integrate the near-data processing approach. To address this gap, we conduct a comprehensive analysis of GNN operation characteristics, taking into consideration the requirements for accelerating the aggregation and combination processes. In this paper, we introduce a near-data processing architecture tailored for GNN acceleration, named NDPGNN. NDPGNN offers different operational modes to meet the acceleration needs of various GNN frameworks, while ensuring system configurability and scalability. Compared with previous approaches, NDPGNN delivers a 5.68× improvement in system performance while reducing energy consumption overhead by 8.49×.
Event Type
Work-in-Progress Poster
Time
Wednesday, June 26, 5:00pm - 6:00pm PDT
Location
Level 2 Lobby
Topics
AI
Autonomous Systems
Cloud
Design
EDA
Embedded Systems
IP
Security