The Fourth International Verification of Neural Networks Competition (VNN-COMP 2023): Summary and Results
by Christopher Brix, Stanley Bak, Changliu Liu, Taylor T. Johnson
First submitted to arXiv on: 28 Dec 2023
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Software Engineering (cs.SE)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | In this year’s International Verification of Neural Networks Competition (VNN-COMP), held at the 6th Workshop on Formal Methods for ML-Enabled Autonomous Systems (FoMLAS) alongside the 35th International Conference on Computer-Aided Verification (CAV), seven teams evaluated state-of-the-art neural network verification tools. To ensure a fair comparison, the competition used standardized formats: ONNX for networks and VNN-LIB for specifications (a sketch of this pairing follows the table). Participants fixed their tool parameters before testing on a diverse set of 10 scored and 4 unscored benchmarks. This summary reports the rules, participating tools, results, and lessons learned from this iteration of the competition. |
Low | GrooveSquid.com (original content) | This year’s neural network verification competition brought together seven teams to test their skills at verifying neural networks. The goal was to see whose tool could verify networks best. To make it fair, everyone used the same formats for networks (ONNX) and specifications (VNN-LIB). Each team chose how their tools would run before testing them on a variety of benchmarks. This summary reports what happened. |
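To make the shared-format setup concrete, here is a minimal sketch (not from the paper) that pairs an ONNX network with an input/output property written in the spirit of a VNN-LIB query. The file name `model.onnx`, the input/output shapes, and the property itself are all illustrative assumptions, and the script only *searches* for counterexamples by random sampling; the competition tools, by contrast, prove or refute such properties formally.

```python
# Falsification-by-sampling sketch: NOT formal verification.
# Assumes a hypothetical "model.onnx" with one input of shape (1, 2)
# and one output of shape (1, 2); names, shapes, and the property
# below are illustrative, not taken from the paper.
import numpy as np
import onnxruntime as ort

# Input box, as it might appear in a VNN-LIB spec:
#   (assert (>= X_0 -1.0)) (assert (<= X_0 1.0))
#   (assert (>= X_1 -1.0)) (assert (<= X_1 1.0))
lower = np.array([-1.0, -1.0], dtype=np.float32)
upper = np.array([1.0, 1.0], dtype=np.float32)

sess = ort.InferenceSession("model.onnx")
input_name = sess.get_inputs()[0].name

rng = np.random.default_rng(0)
for _ in range(10_000):
    # Sample a point inside the input box.
    x = rng.uniform(lower, upper).astype(np.float32).reshape(1, -1)
    y = sess.run(None, {input_name: x})[0].ravel()
    # Violation condition from the spec: (assert (>= Y_0 Y_1)).
    if y[0] >= y[1]:
        print("counterexample found:", x.ravel(), "->", y)
        break
else:
    print("no counterexample in 10,000 samples (property NOT proven)")
```

Sampling can only ever disprove a property; the point of the verification tools compared in VNN-COMP is to also soundly prove properties that hold for every input in the box.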
Keywords
* Artificial intelligence
* Neural network