Summary of Certified Robust Accuracy Of Neural Networks Are Bounded Due to Bayes Errors, by Ruihan Zhang and Jun Sun
Certified Robust Accuracy of Neural Networks Are Bounded due to Bayes Errors
by Ruihan Zhang, Jun Sun
First submitted to arXiv on: 19 May 2024
Categories
- Main: Machine Learning (stat.ML)
- Secondary: Cryptography and Security (cs.CR); Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
| --- | --- | --- |
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | This paper tackles the challenge of achieving robustness against adversarial examples while maintaining model accuracy. The authors analyze the problem through the lens of Bayes errors, showing that the pursuit of certified robustness necessarily reduces accuracy because robustness requirements effectively increase the uncertainty (class overlap) in the data distribution. Building on this insight, they derive an upper bound on certified robust accuracy from the individual class distributions and their boundaries. Experiments on real-world datasets confirm the theoretical bound and highlight the limitations of existing certified training methods. |
| Low | GrooveSquid.com (original content) | This paper explores how to make neural networks more resistant to adversarial examples (inputs deliberately tweaked to fool a model). Currently, making models more robust tends to make them less accurate. The authors investigate why this happens using a concept called Bayes error, the irreducible error that comes from overlapping classes. They show that demanding more robustness always costs some accuracy, and they compute a hard limit on how well a model can do at both. They test their ideas on real-world datasets and find that existing certified training methods have improved only a little over the past few years. |
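The core intuition, that class overlap caps accuracy and that requiring robustness within a radius makes the effective overlap larger, can be illustrated on a toy problem. The sketch below is not the paper's construction; it assumes two equal-prior 1-D Gaussian classes and a simple threshold classifier, for which both the Bayes error and a certified-robust-accuracy bound can be computed in closed form.

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Toy setup (illustrative assumption, not from the paper):
# two equal-prior classes, x ~ N(-1, 1) for class 0 and x ~ N(+1, 1) for class 1.
# The Bayes-optimal threshold is 0; its error is the irreducible Bayes error.
bayes_error = phi(-1.0)               # P(misclassify) under the best possible rule
clean_upper_bound = 1.0 - bayes_error  # no classifier can exceed this accuracy

def robust_accuracy_bound(eps):
    """Best certified robust accuracy of any threshold classifier at radius eps.

    A point x is robustly correct only if the whole interval [x - eps, x + eps]
    falls on the correct side of the threshold. By symmetry the best threshold
    is still 0, giving certified robust accuracy Phi(1 - eps), which shrinks
    as the required radius eps grows.
    """
    return phi(1.0 - eps)

if __name__ == "__main__":
    print(f"Bayes error:            {bayes_error:.4f}")
    print(f"Clean accuracy bound:   {clean_upper_bound:.4f}")
    for eps in (0.0, 0.25, 0.5):
        print(f"eps={eps:.2f} -> certified robust accuracy <= {robust_accuracy_bound(eps):.4f}")
```

At eps = 0 the bound coincides with the clean-accuracy ceiling, and it strictly decreases as the certified radius grows, which mirrors the paper's qualitative claim that certified robust accuracy is bounded below perfect accuracy by distributional overlap.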