

Are Adversarial Examples Suitable To Be Test Suites for Testing Deep Neural Networks
Description
Most existing works reveal that deep learning (DL) systems are extremely susceptible to adversarial examples (AEs), a finding that continues to reverberate through the DL testing community. Consequently, adversarial attacks, especially optimized gradient-based techniques in white-box settings, are exploited to test the robustness of DL models. Although AEs have achieved competitive fault-revealing ability and coverage-improvement ability in DL testing, little research has analyzed this phenomenon theoretically. In this work, we give a formal analysis of the relationship between gradient-based attacks and the minima of the loss function, proving that powerful adversaries share similar feature representations with high probability. Our extensive evaluation and theoretical analysis reveal (1) that optimized gradient-based techniques can cover only a limited set of decision logic, which plainly contradicts the diversity required of test suites, (2) the reasons why adversarial examples can increase test coverage, and (3) the weaknesses of AEs compared with search-based and fuzz-based test-suite generation techniques. Finally, our results show that AEs can efficiently discover vulnerabilities in a DL model but are not suitable as test suites for exploring more of its inner logic.
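As an illustration of the optimized gradient-based attacks discussed above, the following is a minimal single-step (FGSM-style) sketch in PyTorch; the model, inputs, and epsilon value are assumptions made for this example, not artifacts from the poster.

    # Illustrative sketch (not from the poster): a single-step, FGSM-style
    # gradient-based attack on a PyTorch classifier. `model`, `x`, `y`, and
    # the epsilon value are assumed names/values for this example only.
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.03):
        # Perturb x along the sign of the loss gradient, i.e. in the
        # direction that most quickly increases the classification loss.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        # Keep the perturbed input within a valid image range.
        return x_adv.clamp(0.0, 1.0).detach()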
Event Type
Work-in-Progress Poster
Time
Tuesday, June 25, 6:00pm - 7:00pm PDT
Location
Level 2 Lobby
Topics
AI
Autonomous Systems
Cloud
Design
EDA
Embedded Systems
IP
Security