Summary of ProFeAT: Projected Feature Adversarial Training for Self-Supervised Learning of Robust Representations, by Sravanti Addepalli et al.
ProFeAT: Projected Feature Adversarial Training for Self-Supervised Learning of Robust Representations
by Sravanti Addepalli, Priyam Dey, R. Venkatesh Babu
First submitted to arXiv on: 9 Jun 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Computer Vision and Pattern Recognition (cs.CV)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | The paper investigates combining Self-Supervised Learning (SSL) with Adversarial Training (AT), a pairing that has so far been sub-optimal because of the increased training complexity. The authors propose Projected Feature Adversarial Training (ProFeAT) to bridge the performance gap between supervised AT and existing SSL-AT methods. ProFeAT adds a projection head at the student, allowing it to leverage weak supervision from the teacher while learning adversarially robust representations, and it pairs weak augmentations for the teacher with strong augmentations for the student to improve training-data diversity without increasing complexity (see the illustrative sketch after this table). Extensive experiments on benchmark datasets and model architectures show significant improvements in both clean and robust accuracy over existing SSL-AT methods, setting a new state-of-the-art. |
Low | GrooveSquid.com (original content) | The paper explores ways to use Self-Supervised Learning (SSL) together with Adversarial Training (AT). Combining these two approaches currently does not work well because training becomes too complex. The authors propose a new method called Projected Feature Adversarial Training (ProFeAT). It helps a student model learn from a teacher's weak supervision while also learning robust representations, and it mixes different types of augmentations to make training more diverse without extra cost. This leads to better results than previous methods, especially for larger models. |
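To make the recipe described above more concrete, here is a minimal, hypothetical sketch of a ProFeAT-style training step. It assumes a frozen teacher backbone, a student with a projection head, PGD-style attacks computed in projected-feature space, and a cosine-similarity distillation loss; all names (`Student`, `make_adversarial`, `profeat_step`), the loss composition, and where the projector is applied are illustrative assumptions, not the authors' reference implementation.

```python
# Hypothetical ProFeAT-style training step (PyTorch). Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Student(nn.Module):
    """Backbone followed by a projection head; the head is used only during training."""
    def __init__(self, backbone: nn.Module, feat_dim: int, proj_dim: int = 128):
        super().__init__()
        self.backbone = backbone
        self.projector = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.ReLU(inplace=True),
            nn.Linear(feat_dim, proj_dim),
        )

    def forward(self, x):
        feats = self.backbone(x)
        return feats, self.projector(feats)


def cosine_loss(a, b):
    # 1 - cosine similarity, averaged over the batch.
    return (1 - F.cosine_similarity(a, b, dim=-1)).mean()


def make_adversarial(student, x, target_proj, eps=8 / 255, alpha=2 / 255, steps=5):
    """PGD-style attack that pushes the student's projected features away
    from the (detached) target projections."""
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        _, proj = student(torch.clamp(x + delta, 0, 1))
        loss = cosine_loss(proj, target_proj)
        grad = torch.autograd.grad(loss, delta)[0]
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return torch.clamp(x + delta, 0, 1).detach()


def profeat_step(student, teacher, x_weak, x_strong, optimizer):
    """One training step: the frozen teacher sees weakly augmented views,
    the student sees strongly augmented views plus adversarial counterparts."""
    with torch.no_grad():
        t_feats = teacher(x_weak)                  # weak supervision target
    # Projecting teacher features through the student's projector is an
    # assumption made here purely for illustration.
    t_proj = student.projector(t_feats).detach()

    x_adv = make_adversarial(student, x_strong, t_proj)

    _, s_proj_clean = student(x_strong)
    _, s_proj_adv = student(x_adv)

    # Match both clean and adversarial projected features to the teacher target.
    loss = cosine_loss(s_proj_clean, t_proj) + cosine_loss(s_proj_adv, t_proj)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The paper's exact attack and defense losses, and the precise placement of the projection head, may differ; the sketch only illustrates the overall idea of weak-teacher / strong-student augmentations with distillation on projected features.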
Keywords
* Artificial intelligence
* Self-supervised
* Supervised