BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
X-LIC-LOCATION:America/Los_Angeles
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:19700308T020000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:19701101T020000
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20240626T180002Z
LOCATION:3002\, 3rd Floor
DTSTART;TZID=America/Los_Angeles:20240627T133000
DTEND;TZID=America/Los_Angeles:20240627T153000
UID:dac_DAC 2024_sess104@linklings.com
SUMMARY:Foundation Models for EDA and Beyond
DESCRIPTION:Research Manuscript\n\nThis session will dive into the inter
 section of machine learning with EDA and other cutting-edge application
 s. Attendees will witness how large language models (LLMs) revolutioniz
 e tasks ranging from fixing RTL syntax errors and designing operational a
 mplifiers to dramatically cutting the long training time of protein fol
 ding with AlphaFold. The session will subsequently explore sustainable b
 enchmarking for accelerator-aware NAS, real-time network traffic analyti
 cs, anomaly detection at the edge, and ML-driven optimization of physica
 l design parameters for 3D ICs.\n\nAutomatically Fixing RTL Syntax Error
 s with Large Language Model\n\nThis paper presents RTLFixer, a novel fra
 mework enabling automatic syntax error fixing for Verilog code with Larg
 e Language Models (LLMs). Despite LLMs' promising capabilities, our anal
 ysis indicates that approximately 55% of errors in LLM-generated Verilog a
 re syntax-related, leading to compilati...\n\n\nYunDa Tsai, Mingjie Liu, a
 nd Haoxing Ren (NVIDIA)\n--------
 -------------\nArtisan: Automated Operational Amplifier Design via Domain-
 specific Large Language Model\n\nThis paper presents Artisan, an automated
  operational amplifier design framework using large language models. We de
 velop a bidirectional representation to align abstract circuit topologies 
 with their structural and functional semantics. We further employ Tree-of-
 Thoughts and Chain-of-Thoughts approa...\n\n\nZihao Chen, Jiangli Huang, Y
 iting Liu, Fan Yang, and Li Shang (Fudan University); Dian Zhou (The Unive
 rsity of Texas at Dallas); and Xuan Zeng (Fudan University)\n-------------
 --------\nScaleFold: Reducing AlphaFold Initial Training Time to 10 Hours\
 n\nAlphaFold2 has been hailed as a breakthrough in protein folding. It can
  rapidly predict protein structures with lab-grade accuracy. However, its 
 training procedure is prohibitively time-consuming, and gets diminishing b
 enefits from scaling to more compute resources. In this work, we conducted
  a comp...\n\n\nFeiwen Zhu, Arkadiusz Nowaczynski, Rundong Li, Jie Xin, Yi
 fei Song, Michal Marcinkiewicz, Sukru Eryilmaz, June Yang, and Michael And
 ersch (NVIDIA)\n---------------------\nVARADE: a Variational-based AutoReg
 ressive model for Anomaly Detection on the Edge\n\nDetecting complex anoma
 lies on massive amounts of data is a crucial task in Industry 4.0, best ad
 dressed by deep learning. However, available solutions are computationally
  demanding, requiring cloud architectures prone to latency and bandwidth i
 ssues. This work presents VARADE, a novel solution impl...\n\n\nAlessio Ma
 scolini (Politecnico di Torino); Sebastiano Gaiardelli (University of Vero
 na); Francesco Ponzio (Politecnico di Torino); Nicola Dall'Ora (University
  of Verona); Enrico Macii, Sara Vinco, and Santa di Cataldo (Politecnico d
 i Torino); and Franco Fummi (University of Verona)\n---------------------\
 nData is all you need: Finetuning LLMs for Chip Design via an Automated d
 esign-data augmentation framework\n\nRecent advances in large language mod
 els have demonstrated their potential for automated generation of Verilog 
 code from high-level prompts. Researchers have utilized fine-tuning to enh
 ance the ability of these large language models (LLMs) in the field of Chi
 p Design. However, the lack of Verilog da...\n\n\nKaiyan Chang (State Key 
 Lab of Processors, Institute of Computing Technology, Chinese Academy of S
 ciences); Kun Wang and Nan Yang (Institute of Computing Technology, Chines
 e Academy of Sciences); Ying Wang (State Key Laboratory of Computer Archit
 ecture, Institute of Computing Technology, Chinese Academy of Sciences, Un
 iversity of Chinese Academy of Sciences); Dantong Jin (Zhejiang Lab); Wenl
 ong Zhu (Institute of Computing Technology, Chinese Academy of Sciences); 
 Zhirong Chen (Zhejiang University); Cangyuan Li (Institute of Computing Te
 chnology, Chinese Academy of Sciences); Hao Yan (Shanghai University); Yun
 hao Zhou (Shanghai Jiao Tong University); Zhuoliang Zhao (Fudan University
 ); Yuan Cheng (Nanjing University); Yudong Pan, Yiqi Liu, and Mengdi Wang 
 (Institute of Computing Technology, Chinese Academy of Sciences); Shengwen
  Liang (State Key Lab of Processors, Institute of Computing Technology, Ch
 inese Academy of Sciences); and Yinhe Han, Huawei Li, and Xiaowei Li (Inst
 itute of Computing Technology, Chinese Academy of Sciences)\n-------------
 --------\nML-based Physical Design Parameter Optimization for 3D ICs: From
  Parameter Selection to Optimization\n\nWhile various studies have shown e
 ffective parameter optimizations for specific designs, there is limited ex
 ploration of parameter optimization within the domain of 3D Integrated Cir
 cuits. We present the first comprehensive study, both qualitatively and qu
 antitatively, comparing five state-of-the-ar...\n\n\nHao-Hsiang Hsiao, Pru
 ek Vanna-iampikul, Yi-Chen Lu, and Sung Kyu Lim (Georgia Institute of Tech
 nology)\n---------------------\nAccel-NASBench: Sustainable Benchmarking f
 or Accelerator-Aware NAS\n\nOne of the primary challenges impeding the pro
 gress of Neural Architecture Search (NAS) is its extensive reliance on exo
 rbitant computational resources. NAS benchmarks aim to simulate runs of NA
 S experiments at zero cost, obviating the need for extensive compute. Ho
 wever, existing NAS benchmarks u...\n\n\nAfzal Ahmad, Linfeng Du, Zhiyao X
 ie, and Wei Zhang (Hong Kong University of Science and Technology (HKUST))
 \n---------------------\nTrafficHD: Efficient Hyperdimensional Computing f
 or Real-Time Network Traffic Analytics\n\nWith the evolution of network in
 frastructure, network traffic patterns have become unprecedentedly comple
 x. Conventional machine learning algorithms struggle to cope with t
 he high-dimensional data and real-time processing speeds required in such 
 complex networks. Fortunately, hyperdimensiona...\n\n\nHaodong Lu, Zhiyuan
  Ma, Xinran Li, Shiyan Bi, Xiaoming He, and Kun Wang (Fudan University)\n\
 nTopic: AI\n\nKeyword: AI/ML Application and Infrastructure\n\nSession Cha
 irs: Cong (Callie) Hao (Georgia Institute of Technology) and Haoxing “Mark
 ” Ren (NVIDIA)
END:VEVENT
END:VCALENDAR
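
A minimal parsing sketch, assuming the third-party Python package
"icalendar" and a hypothetical filename dac2024_sess104.ics; the library
unfolds the space-prefixed continuation lines and decodes the \n and \,
escapes used in the DESCRIPTION and LOCATION properties above.

    from icalendar import Calendar

    # Read raw bytes; Calendar.from_ical handles RFC 5545 line unfolding.
    with open("dac2024_sess104.ics", "rb") as f:
        cal = Calendar.from_ical(f.read())

    # Walk every VEVENT component (this file contains exactly one).
    for event in cal.walk("VEVENT"):
        print(event.get("SUMMARY"))      # session title
        print(event.decoded("DTSTART"))  # tz-aware datetime via the VTIMEZONE
        print(event.get("LOCATION"))     # room string, with \, unescaped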
