Summary of Towards Certification of Uncertainty Calibration under Adversarial Attacks, by Cornelius Emde et al.
Towards Certification of Uncertainty Calibration under Adversarial Attacks
by Cornelius Emde, Francesco Pinto, Thomas Lukasiewicz, Philip H.S. Torr, Adel Bibi
First submitted to arXiv on: 22 May 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Machine Learning (stat.ML)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | The paper's original abstract, available on its arXiv page. |
| Medium | GrooveSquid.com (original content) | This paper studies how to make neural classifiers robust to adversarial perturbations, which can drastically degrade their accuracy. The authors develop certified methods that bound how sensitive a model's predictions are to such perturbations. They also focus on model calibration, which is crucial in safety-critical applications where a classifier's reported confidence matters. The paper shows that attacks can severely harm calibration and proposes certified calibration: worst-case bounds on calibration error under adversarial perturbations (an illustrative calibration sketch follows the table). |
| Low | GrooveSquid.com (original content) | A new study looks at making sure neural networks are not tricked by slightly altered inputs that can change their answers. To do this, the researchers create methods that measure how much a network's predictions can change when its input is tampered with. They also look at model calibration, which asks whether a network's confidence matches how often it is actually right; this matters a lot in safety-critical settings. The paper shows that attackers can make networks seem more confident than they should be, and it proposes guarantees that limit how bad this can get. |
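The summaries above talk about calibration informally. As a concrete reference point, here is a minimal, self-contained Python sketch of the standard expected calibration error (ECE) metric together with a toy confidence-inflation perturbation. It only illustrates how a perturbation can hurt calibration without touching accuracy; it is not the certification procedure proposed in the paper, and the synthetic data, the `eps` value, and the binning scheme are arbitrary assumptions made for the example.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Standard ECE: weighted average of |accuracy - confidence| over equal-width bins."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(confidences)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = correct[mask].mean()        # empirical accuracy in this bin
            conf = confidences[mask].mean()   # mean predicted confidence in this bin
            ece += (mask.sum() / n) * abs(acc - conf)
    return ece

# Toy data: 1000 predictions that are roughly calibrated by construction.
rng = np.random.default_rng(0)
confidences = rng.uniform(0.5, 1.0, size=1000)
correct = rng.random(1000) < confidences

print("clean ECE:   ", expected_calibration_error(confidences, correct))

# Hypothetical confidence attack: inflate every confidence by eps (clipped to [0, 1])
# without changing which predictions are correct. Accuracy is untouched,
# but the calibration error grows.
eps = 0.15
attacked = np.clip(confidences + eps, 0.0, 1.0)
print("attacked ECE:", expected_calibration_error(attacked, correct))
```

The paper's certified calibration goes further than this sketch: rather than measuring the damage of one particular perturbation, it bounds the worst-case calibration error over all perturbations within an allowed budget.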