Summary of Adversarial Training in Low-Label Regimes with Margin-Based Interpolation, by Tian Ye et al.
Adversarial Training in Low-Label Regimes with Margin-Based Interpolation
by Tian Ye, Rajgopal Kannan, Viktor Prasanna
First submitted to arXiv on: 27 Nov 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Cryptography and Security (cs.CR); Computer Vision and Pattern Recognition (cs.CV)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | The paper's original abstract (available on arXiv). |
Medium | GrooveSquid.com (original content) | This paper presents a semi-supervised adversarial training approach that improves both robustness and natural accuracy by generating more effective adversarial examples. The method linearly interpolates between clean and adversarial examples to produce interpolated adversarial examples that cross the decision boundary by a controlled margin. This sample-aware strategy tailors the perturbation to each data point, so the model learns from the most informative examples. The paper also proposes a global epsilon scheduling strategy that progressively adjusts the upper bound on perturbation strength during training (a rough code sketch of both ideas follows the table). Empirical evaluations show that the method improves performance against adversarial attacks such as PGD and AutoAttack. |
Low | GrooveSquid.com (original content) | This paper shows a way to make neural networks more robust to misleading inputs by creating training examples that lie partway between a real example and its attacked version. This helps the network learn from the most important mistakes. The approach also gradually adjusts how strongly it perturbs the data during training. Tests show the method works well against different types of attacks. |
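The summaries above do not spell out the paper's exact algorithm, but the two ideas can be illustrated with a short PyTorch sketch. Everything below is an assumption for illustration: the `pgd_attack` helper, the binary-search margin criterion in `margin_interpolate`, and the linear `epsilon_schedule` are plausible stand-ins, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def epsilon_schedule(epoch, total_epochs, eps_max=8 / 255, warmup_frac=0.5):
    """Global schedule (assumed linear): grow the perturbation bound early in
    training, then hold it at eps_max."""
    progress = min(1.0, epoch / max(1.0, warmup_frac * total_epochs))
    return eps_max * progress


def pgd_attack(model, x, y, eps, alpha=2 / 255, steps=10):
    """Standard L-infinity PGD, used here only as a generic source of
    adversarial examples; the paper may use a different attack or loss."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)
        x_adv = x_adv.clamp(0, 1).detach()
    return x_adv


@torch.no_grad()
def margin_interpolate(model, x, y, x_adv, search_steps=8, margin=0.1):
    """Per-sample binary search for the smallest interpolation weight lam such
    that x + lam * (x_adv - x) is misclassified, then overshoot by `margin` so
    the interpolated example crosses the decision boundary by a controlled
    amount. Assumes image batches of shape (B, C, H, W)."""
    lo = torch.zeros(x.size(0), device=x.device)
    hi = torch.ones_like(lo)
    for _ in range(search_steps):
        mid = (lo + hi) / 2
        x_mid = x + mid.view(-1, 1, 1, 1) * (x_adv - x)
        wrong = model(x_mid).argmax(dim=1) != y
        # Move hi down where the midpoint is already misclassified,
        # move lo up where it is still classified correctly.
        hi = torch.where(wrong, mid, hi)
        lo = torch.where(wrong, lo, mid)
    lam = (hi + margin).clamp(max=1.0).view(-1, 1, 1, 1)
    return (x + lam * (x_adv - x)).clamp(0, 1)
```

In a training loop one would typically call something like `eps = epsilon_schedule(epoch, total_epochs)`, generate `x_adv = pgd_attack(model, x, y, eps)`, and train on `margin_interpolate(model, x, y, x_adv)`; the paper's actual loss, semi-supervised labeling of unlabeled data, and margin control likely differ from this sketch.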
Keywords
» Artificial intelligence » Semi-supervised