
From RTL to Prompt: An LLM-assisted Verification Methodology for General Processor
Description
RTL verification has existed for decades and is crucial for identifying potential bugs before chip tape-out. However, hand-crafting test cases is time-consuming and error-prone, even for experienced test engineers. Prior work has attempted to lighten this burden with rule-guided random generation, but this does not eliminate the manual effort of writing rules about detailed hardware behavior. Motivated by the increased need for RTL verification in the era of Domain-Specific Architectures (DSAs) and by advances in large language models (LLMs), we explore whether LLMs can capture RTL behavior and generate test cases automatically, introducing three distinct prompt approaches to enhance the LLM's ability to generate tests. We use GPT-3.5, an advanced LLM, to verify a 12-stage, multi-issue, out-of-order RV64GC processor, achieving a 14% increase in block coverage and an 11% increase in expression coverage compared to randomization. Moreover, combining LLM generation with handcrafting greatly reduces the human effort required, demonstrating a potential methodology for future processor verification. In addition, we provide an open-source prompt library integrated with GPT-3.5, offering a standardized set of prompts that caters to a diverse range of processor verification scenarios. The prompt library is available at https://github.com/From-RTL-to-Prompt/LLM-prompt-library.
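As a rough illustration of the workflow described above (not the authors' exact prompts, which are published in the linked repository), the following minimal sketch assumes the OpenAI Python SDK and a hypothetical prompt template that asks GPT-3.5 to produce an RV64GC assembly test for a specific target behavior, which would then be simulated against the RTL to measure coverage.

```python
# Minimal sketch of an LLM-driven test-generation step, assuming the OpenAI
# Python SDK (`pip install openai`) and an OPENAI_API_KEY in the environment.
# The prompt text below is a hypothetical stand-in for the templates in the
# LLM-prompt-library repository.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

# Hypothetical template: describe the processor and the behavior to cover,
# then ask the model for a directed assembly test.
PROMPT_TEMPLATE = (
    "You are verifying a 12-stage, multi-issue, out-of-order RV64GC processor.\n"
    "Target behavior: {behavior}\n"
    "Write a self-checking RV64GC assembly test that exercises this behavior.\n"
    "Return only the assembly source."
)

def generate_test(behavior: str, out_path: Path) -> None:
    """Query GPT-3.5 for a directed test case and save it for RTL simulation."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": PROMPT_TEMPLATE.format(behavior=behavior)}],
        temperature=0.7,
    )
    out_path.parent.mkdir(parents=True, exist_ok=True)
    out_path.write_text(response.choices[0].message.content)

if __name__ == "__main__":
    # Example (hypothetical) coverage target and output location.
    generate_test(
        behavior="load-store forwarding across speculatively executed branches",
        out_path=Path("tests/lsu_forwarding.S"),
    )
```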
Event Type
Work-in-Progress Poster
Time
Tuesday, June 25, 6:00pm - 7:00pm PDT
Location
Level 2 Lobby
Topics
AI
Autonomous Systems
Cloud
Design
EDA
Embedded Systems
IP
Security