BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
X-LIC-LOCATION:America/Los_Angeles
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:19700308T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:19701101T020000
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240626T180033Z
LOCATION:3012\, 3rd Floor
DTSTART;TZID=America/Los_Angeles:20240625T161500
DTEND;TZID=America/Los_Angeles:20240625T163000
UID:dac_DAC 2024_sess112_RESEARCH1693@linklings.com
SUMMARY:Hardware-Aware Neural Dropout Search for Reliable Uncertainty Pred
 iction on FPGA
DESCRIPTION:Research Manuscript\n\nZehuan Zhang (Imperial College London),
  Hongxiang Fan (Samsung), Hao (Mark) Chen (Imperial College London), Lukas
 z Dudziak (Samsung), and Wayne Luk (Imperial College London)\n\nThe increa
 sing deployment of artificial intelligence (AI) for critical decision-maki
 ng amplifies the necessity for trustworthy AI, where uncertainty estimatio
 n plays a pivotal role in ensuring trustworthiness. Dropout-based Bayesian
  Neural Networks (BayesNNs) are prominent in this field, offering reliabl
 e uncertainty estimates. Despite their effectiveness, existing dropout-bas
 ed BayesNNs typically employ a uniform dropout design across different lay
 ers, leading to suboptimal performance. Moreover, as diverse applications 
 require tailored dropout strategies for optimal performance, manually opti
 mizing dropout configurations for various applications is both error-prone
  and labor-intensive. To address these challenges, this paper proposes a n
 ovel neural dropout search framework that automatically optimizes both the
  dropout-based BayesNNs and their hardware implementations on FPGA. We lev
 erage one-shot supernet training with an evolutionary algorithm for effici
 ent dropout optimization. A layer-wise dropout search space is introduced 
 to enable the automatic design of dropout-based BayesNNs with heterogeneou
 s dropout settings. Extensive experiments demonstrate that our proposed fr
 amework can effectively find design configurations on the Pareto frontier.
  Compared to manually-designed dropout-based BayesNNs on GPU, our search
  approach produces FPGA designs that can achieve up to 33× higher energy
  effi
 ciency. Compared to state-of-the-art FPGA designs of BayesNNs, the
  solutions from our approach can achieve higher algorithmic performance.
  Our design
 s and tools will be open-source upon paper acceptance.\n\nTopic: Design\n\
 nKeyword: SoC, Heterogeneous, and Reconfigurable Architectures\n\nSession 
 Chairs: Dimitrios Soudris (National Technical University of Athens) and Ge
 orge Tzimpragos (University of Michigan)
END:VEVENT
END:VCALENDAR
