Soften to Defend: Towards Adversarial Robustness via Self-Guided Label Refinement
by Daiwei Yu, Zhuorong Li, Lina Wei, Canghong Jin, Yun Zhang, Sixian Chan
First submitted to arXiv on: 14 Mar 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Cryptography and Security (cs.CR); Computer Vision and Pattern Recognition (cs.CV)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | This paper proposes a novel approach to adversarial training (AT) that addresses robust overfitting, a failure mode in which deep neural networks over-memorize noisy labels during AT. Building on an identified connection between robust overfitting and excessive memorization of noisy labels, the authors develop a label refinement method called Self-Guided Label Refinement. It refines the label distribution away from over-confident hard labels and calibrates training with knowledge from self-distilled models, eliminating the need for external teachers (see the illustrative sketch below the table). The method is evaluated across multiple benchmark datasets, attack types, and architectures, showing improvements in both standard accuracy and robust performance. The authors also analyze the approach from an information-theoretic perspective. |
Low | GrooveSquid.com (original content) | This paper helps make deep neural networks more secure by fixing a big problem called robust overfitting. It’s like when you learn something but then forget it because you got too good at memorizing the wrong answers! The researchers discovered that this happens during “adversarial training”, where we try to teach computers to resist fake data. They developed a new way to improve this process by refining the labels (like correcting mistakes) and having the computer learn from its own earlier answers rather than relying on someone else’s guidance. This makes the computer more accurate and robust against different types of fake data. |
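The medium summary describes refining over-confident one-hot labels with guidance from the model’s own self-distilled predictions. Below is a minimal PyTorch sketch of that general idea. The helper names (`refine_labels`, `at_step`), the use of an EMA copy of the model as the self-teacher, and the mixing weight `rho` are illustrative assumptions for this sketch, not the paper’s exact formulation.

```python
import torch
import torch.nn.functional as F

def refine_labels(hard_labels, teacher_logits, num_classes, rho=0.7):
    """Soften one-hot labels with the model's own (self-distilled) predictions.

    Sketch only: the mixing weight `rho` and the exact update rule are
    assumptions, not the paper's precise method.
    """
    one_hot = F.one_hot(hard_labels, num_classes).float()
    soft = teacher_logits.softmax(dim=-1)
    return rho * one_hot + (1.0 - rho) * soft

def at_step(model, ema_model, x_adv, y, optimizer, num_classes=10):
    """One simplified adversarial-training step on pre-generated
    adversarial examples `x_adv` (e.g. from a PGD attack)."""
    with torch.no_grad():
        teacher_logits = ema_model(x_adv)  # self-guidance, no external teacher
    targets = refine_labels(y, teacher_logits, num_classes)
    logits = model(x_adv)
    # Cross-entropy against the refined soft label distribution
    loss = torch.sum(-targets * F.log_softmax(logits, dim=-1), dim=-1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch the self-guidance comes from an exponential-moving-average copy of the model purely for concreteness; the key point from the summary is that the softened targets are produced by the model itself rather than by an external teacher network.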
Keywords
* Artificial intelligence
* Overfitting