BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
X-LIC-LOCATION:America/Los_Angeles
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:19700308T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:19701101T020000
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240626T180033Z
LOCATION:3003\, 3rd Floor
DTSTART;TZID=America/Los_Angeles:20240625T111500
DTEND;TZID=America/Los_Angeles:20240625T113000
UID:dac_DAC 2024_sess120_RESEARCH911@linklings.com
SUMMARY:SC-GNN: A Communication-Efficient Semantic Compression for Distrib
 uted Training of GNNs
DESCRIPTION:Research Manuscript\n\nJihe Wang, Ying Wu, and Danghui Wang (N
 orthwestern Polytechnical University)\n\nTraining large graph neural netw
 orks (GNNs) on distributed systems is time-consuming, mainly because of th
 e ubiquitous aggregation operations that require a large volume of cross-p
 artition communication to collect embeddings/gradients during forward an
 d backward propagation. To reduce this communication volume, some recent a
 pproaches decay individual connections via sampling, quantization, or dela
 ying until a satisfactory trade-off between volume and accuracy is obtaine
 d. However, when applied to popular GNNs, these approaches are bounded b
 y a common volume/accuracy Pareto frontier, which shows that decaying ind
 ividual connections cannot further accelerate the aggregation phase of tr
 aining. In this work, SC-GNN, a semantic compression scheme for cross-par
 tition communication, is proposed to condense a group of connections int
 o a high-level semantic and transmit it to a target partition. Because i
 t carries the overall intent of the group, the semantic can continue tran
 sferring the interactions, i.e., embeddings/gradients, between a pair of r
 emote partitions until the GNN model converges. In addition, a connection
 -pattern-based differential optimization is proposed to further prune wea
 k connections while preserving training accuracy. Results show that, on m
 ulti-field datasets, the compression rate of SC-GNN is 40.8 times higher t
 han that of SOTA methods, and epoch time is reduced to 31.77% on average.
 \n\nTopic: AI, Design\n\nKeyword: AI/ML Architecture Design\n\nSession Ch
 airs: Andrey Ayupov (Google) and Subhendu ROY (Cadence Design Systems, In
 c.)
END:VEVENT
END:VCALENDAR
