Presentation

Breaking the Formal Convergence Barriers of a Floating-Point Dot-Product Block for AI/ML Accelerators
Description
Dot-product compute engines are pivotal to AI/ML hardware accelerators. Multi-term, floating-point dot-product engines increase datapath complexity due to the added logic for rounding, normalization, and alignment of significands to the maximum exponent. To formally verify such dot-product compute engines, a C/C++ vs RTL formal equivalence tool (e.g., Synopsys VC Formal DPV) is used. The datapath complexity of a multi-term, floating-point dot-product engine in a complex AI/ML chip, together with the differing dataflow graph (DFG) structures of the corresponding C/C++ and RTL models, often makes it difficult for the formal tool to converge. This work presents the techniques adopted to achieve formal convergence across several floating-point types: assume-guarantee reasoning, lemma partitioning, DFG optimization, maximizing equivalence points, case splitting, and optimized solvers. In addition, we enable helper lemmas after deriving adder-tree expressions that match the RTL and C-model adder-tree structures. The results show that a formal run for a multi-term FP32 dot-product operation can converge within 30 minutes. We recommend a new VC Formal DPV feature to detect adder trees and resolve them automatically in the flow, which Synopsys is currently developing.
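The sketch below is illustrative only and is not the presenters' actual model: a minimal C++ reference for a hypothetical 4-term FP32 dot product, written two ways, to show why the adder-tree structure of the C model must be matched to the RTL's tree when proving equivalence. Floating-point addition is not associative, so a sequential accumulation and a balanced tree can produce different results, which is what the adder-tree helper lemmas have to account for.

```cpp
// Illustrative sketch: two orderings of a 4-term FP32 dot product.
// Function names and the 4-term width are hypothetical examples.
#include <cstdio>

// Sequential accumulation, as a naive C model might be written:
// (((p0 + p1) + p2) + p3)
float dot4_sequential(const float a[4], const float b[4]) {
    float acc = 0.0f;
    for (int i = 0; i < 4; ++i)
        acc += a[i] * b[i];
    return acc;
}

// Balanced adder tree, the shape RTL datapaths typically implement:
// (p0 + p1) + (p2 + p3)
float dot4_tree(const float a[4], const float b[4]) {
    float p0 = a[0] * b[0], p1 = a[1] * b[1];
    float p2 = a[2] * b[2], p3 = a[3] * b[3];
    return (p0 + p1) + (p2 + p3);
}

int main() {
    // Inputs chosen so rounding exposes the ordering difference.
    const float a[4] = {1.0e8f, 1.0f, -1.0e8f, 1.0f};
    const float b[4] = {1.0f, 1.0f, 1.0f, 1.0f};
    // Prints 1 for the sequential order and 0 for the tree order,
    // illustrating why helper lemmas must encode the RTL's tree shape.
    std::printf("sequential = %g\n", dot4_sequential(a, b));
    std::printf("tree       = %g\n", dot4_tree(a, b));
    return 0;
}
```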
Event Type
Front-End Design
Time
Monday, June 24, 2:15pm - 2:30pm PDT
Location
2010, 2nd Floor
Topics
Design
Engineering Tracks
Front-End Design