Summary of Boosting Adversarial Training via Fisher-Rao Norm-based Regularization, by Xiangyu Yin et al.
Boosting Adversarial Training via Fisher-Rao Norm-based Regularization
by Xiangyu Yin, Wenjie Ruan
First submitted to arXiv on: 26 Mar 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Computer Vision and Pattern Recognition (cs.CV)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper, each written at a different level of difficulty. The medium-difficulty and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | This paper addresses the open problem of mitigating the degradation of standard generalization performance in adversarially trained models. Adversarial training is widely used to improve the robustness of deep neural networks, but it often comes at the cost of reduced clean accuracy. The authors propose a novel regularization framework called Logit-Oriented Adversarial Training (LOAT), which balances robustness and accuracy while introducing minimal computational overhead. By analyzing model complexity through the Fisher-Rao norm, the authors identify a complexity-related variable that correlates with the generalization gap between adversarially trained and standard-trained models. This insight drives the design of LOAT, which is demonstrated to improve several prevalent adversarial training algorithms across various network architectures (see the code sketch after the table). |
Low | GrooveSquid.com (original content) | This paper tries to solve a big problem in artificial intelligence called “adversarial robustness”. It’s like trying to make sure a computer program can still work well even if someone tries to trick it. Right now, there are ways to make programs more robust, but they often make the program less good at doing its normal job. The authors of this paper came up with a new idea called Logit-Oriented Adversarial Training (LOAT) that might help fix this problem. They used special math tools to understand how complex computer models can be, and they found a way to balance making models more robust with keeping them good at their main job. |
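To make the idea above concrete: the Fisher-Rao norm of a model’s parameters θ is the standard quantity θᵀI(θ)θ, where I(θ) is the Fisher information matrix, and for ReLU networks it is known to relate to the magnitude of the network’s output logits, which is what motivates a logit-oriented regularizer. Below is a minimal PyTorch sketch of where such a regularizer would enter an adversarial training loop. It is not the paper’s actual LOAT algorithm: the squared-logit-norm penalty, the `pgd_attack` helper, and the `lambda_reg` weight are illustrative assumptions, not the authors’ formulation.

```python
import torch
import torch.nn.functional as F


def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Generic L-infinity PGD attack (standard recipe, not paper-specific)."""
    x = x.detach()
    # Random start inside the eps-ball, clipped to the valid pixel range.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Gradient-sign ascent step, then projection back into the eps-ball.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()


def logit_regularized_adv_step(model, x, y, optimizer, lambda_reg=0.1):
    """One adversarial training step with a logit-magnitude penalty.

    The penalty is a hypothetical stand-in for a Fisher-Rao-norm-based
    regularizer: since the Fisher-Rao norm of a ReLU network relates to
    its logits, damping logit magnitude is a cheap complexity control.
    """
    model.train()
    x_adv = pgd_attack(model, x, y)
    logits_adv = model(x_adv)

    loss_ce = F.cross_entropy(logits_adv, y)         # robust classification loss
    loss_reg = logits_adv.pow(2).sum(dim=1).mean()   # squared logit norm

    loss = loss_ce + lambda_reg * loss_reg
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice `lambda_reg` would need tuning per dataset and architecture, and the paper’s regularizer targets the logits in a more refined, algorithm-aware way; this sketch only shows the overall shape of the training loop.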
Keywords
- Artificial intelligence
- Generalization
- Regularization