New Paradigm of Adversarial Training: Breaking Inherent Trade-Off between Accuracy and Robustness via Dummy Classes
by Yanyun Wang, Li Liu, Zi Liang, Qingqing Ye, Haibo Hu
First submitted to arXiv on: 16 Oct 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | High Difficulty Summary: Read the original abstract here |
| Medium | GrooveSquid.com (original content) | Medium Difficulty Summary: A recent paper proposes a new approach to Adversarial Training (AT) for enhancing the robustness of Deep Neural Networks (DNNs). The current AT paradigm suffers from an inherent trade-off between adversarial robustness and clean accuracy, which hinders real-world deployment. The authors address this limitation with a new AT paradigm that introduces an additional dummy class for each original class, accommodating hard adversarial samples whose distributions shift after perturbation. The proposed DUmmy Classes-based Adversarial Training (DUCAT) method achieves concurrent improvements in both clean accuracy and adversarial robustness on the CIFAR-10, CIFAR-100, and Tiny-ImageNet datasets, effectively breaking the existing trade-off. |
| Low | GrooveSquid.com (original content) | Low Difficulty Summary: This paper talks about a new way to make Deep Neural Networks (DNNs) more secure against fake data. Right now, there's a problem with making DNNs robust: they either get really good at handling fake data or stay accurate on normal data, but not both. The authors introduce an idea that helps solve this by adding extra classes for the fake data and letting the network learn from them. They call this new approach DUCAT (Dummy Classes-based Adversarial Training). The results show that DUCAT does a great job of making DNNs more robust while also keeping their accuracy high. |
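The dummy-class idea described in the summaries above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's actual implementation: the pairing of each original class `c` with a dummy class `c + C`, the hard-label target assignment, and the logit-merging rule at inference are all simplifying assumptions made here for clarity.

```python
import numpy as np

C = 10  # number of original classes (e.g., CIFAR-10)

def dummy_target(label, is_adversarial, num_classes=C):
    """Assumed training-target rule: clean samples keep their original
    label c, while adversarial samples are routed to the paired dummy
    class c + C (the paper's exact labeling scheme may differ)."""
    return label + num_classes if is_adversarial else label

def predict(logits_2c, num_classes=C):
    """Assumed inference rule: fold each dummy class back into its
    original class by summing the paired logits, then take the argmax
    over the original C classes."""
    merged = logits_2c[:num_classes] + logits_2c[num_classes:]
    return int(np.argmax(merged))
```

For example, an adversarial input of class 3 would be trained toward dummy class 13 (`dummy_target(3, True)`), yet at test time a high logit on index 13 is still recovered as a prediction of class 3 by `predict`, so the dummy classes absorb the shifted adversarial distribution without costing clean accuracy.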