Organizer:
David Kelf
Moderator:
Karl Freund
Founder and Principal Analyst
Cambrian AI Research
Panelists:
Tushit Jain
Senior Director, Machine Learning Research
Qualcomm
John Rose
Product Engineering Group Director
Cadence
Ravi Gal
IBM Research
IBM
Daniel Hansson
Founder & CEO
Verifyter
Adnan Hamid
CEO
Breker Verification Systems
Darren Galpin
Principal Digital Verification Engineer
Renesas

Verification productivity has historically depended largely on the performance of verification tools and on the ingenuity of the engineers driving them. The entire verification loop, from test content composition and execution through debug and coverage management, relies on engineers manually devising test programs and analyzing the results.
Verification continues to evolve as functional requirements grow and SoC integrity concerns add to them. Applications such as 5G, autonomous driving, and quantum computing push verification complexity to the limits of manual engineering. Engineers need help to contain the verification explosion driven by this expansion.
Machine Learning (ML) is proving itself a powerful weapon in the hands of engineers tackling massive, difficult analysis problems. ML can be a vital aid to verification engineers struggling with these challenges under increasing schedule pressure and quality demands.
For verification, ML may be used to drive test content efficiency and reduce redundancy, predict potential bugs from big-data regression output, select the appropriate engines for specific problems in formal and test synthesis tools, and perform a myriad of other tasks. However, some might argue that overreliance on ML could reduce quality and result in missed issues.
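As a purely illustrative sketch of one of these applications, the snippet below ranks regression tests by predicted failure probability so the most suspicious tests run first. The features, data, and model choice are hypothetical placeholders for discussion, not the flow of any panelist's product.

```python
# Illustrative sketch: rank regression tests by predicted failure probability
# using a gradient-boosted classifier. Features and data are synthetic
# placeholders, not a real regression database.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-test features gathered from past regressions:
# [lines changed in touched blocks, days since last failure,
#  historical failure rate, runtime in minutes]
n_tests = 5000
X = np.column_stack([
    rng.poisson(40, n_tests),          # churn in the logic the test covers
    rng.exponential(30, n_tests),      # recency of last failure
    rng.beta(2, 20, n_tests),          # historical failure rate
    rng.gamma(2.0, 15.0, n_tests),     # cost: runtime in minutes
])
# Synthetic labels: failures correlate with churn and past failure rate.
p_fail = 1 / (1 + np.exp(-(0.03 * X[:, 0] + 8 * X[:, 2] - 3.5)))
y = rng.binomial(1, p_fail)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Rank held-out tests: schedule the most failure-prone ones first so bugs
# surface earlier and redundant passing tests can be deferred.
scores = model.predict_proba(X_test)[:, 1]
order = np.argsort(scores)[::-1]
print("Top 10 tests to schedule first:", order[:10])
```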
Moderated by AI expert Karl Freund, this panel of practitioners, each of whom has developed or used ML-driven verification technology, will discuss and contrast the use and value of ML by examining real-world applications in this area.