Presentation

Invited: New Solutions on LLM Acceleration, Optimization, and Application
Description
In this talk, we address the longstanding challenge of automating the optimization and verification of High-Level Synthesis (HLS)-based hardware accelerators. Traditional methods, including machine learning and compilation-based approaches, have been hindered by limitations in either the quality of results or their generalizability. To overcome these limitations, we introduce a novel framework built on Large Language Model (LLM) techniques. We begin by constructing an extensive HLS design and bug dataset comprising 1113 real-world HLS designs sourced from 12 diverse HLS libraries and benchmark suites. This dataset is enhanced using an LLM to inject complex logical HLS bugs that cannot be captured by traditional HLS tools. Leveraging this enriched dataset, we develop and train a custom LLM specialized not only to generate optimized HLS designs (e.g., by inserting optimization directives), but also to accurately identify bugs in given HLS designs. Our experiments demonstrate that this model surpasses ChatGPT-4 Turbo in delivering higher-quality optimizations and more accurate bug detection in HLS designs, while maintaining a smaller model size and lower inference latency. Collectively, these integrated frameworks mark a substantial advancement in AI accelerator design. They not only enhance the efficiency and accessibility of AI accelerator development but also serve as a bridge between AI algorithmic advancements and hardware innovation.
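To make "inserting optimization directives" concrete, here is a minimal sketch of a Vitis-HLS-style C++ kernel annotated with common HLS pragmas. This is our own illustrative example (the kernel name `vec_add` and the specific pragmas are assumptions, not taken from the talk); a plain C++ compiler ignores the pragmas, while an HLS tool uses them to pipeline the loop and partition the arrays.

```cpp
#include <cassert>

// Hypothetical HLS kernel: element-wise addition of two 16-element arrays.
// The pragmas below are Vitis-HLS-style optimization directives of the kind
// an LLM might insert automatically; a regular C++ compiler ignores them.
void vec_add(const int a[16], const int b[16], int out[16]) {
#pragma HLS ARRAY_PARTITION variable=a complete
#pragma HLS ARRAY_PARTITION variable=b complete
    for (int i = 0; i < 16; ++i) {
#pragma HLS PIPELINE II=1
        // With PIPELINE II=1, the HLS tool attempts to start a new loop
        // iteration every clock cycle.
        out[i] = a[i] + b[i];
    }
}
```

Choosing which pragmas to apply, and with what parameters, is the design-space exploration task that the custom LLM described above is trained to automate.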
Event Type
Special Session (Research)
Time
Wednesday, June 26, 1:30pm - 2:00pm PDT
Location
3006, 3rd Floor
Topics
AI