Enhancing Adversarial Robustness of Deep Neural Networks Through Supervised Contrastive Learning

by Longwei Wang, Navid Nayyem, Abdullah Rakin

First submitted to arXiv on: 27 Dec 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper's original abstract, which can be read on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This novel framework combines supervised contrastive learning with a margin-based contrastive loss to enhance the adversarial robustness of convolutional neural networks. Supervised contrastive learning improves the structure of the feature space by clustering embeddings of samples from the same class and separating those from different classes, while the margin-based contrastive loss enforces explicit constraints that produce decision boundaries with well-defined margins. Experiments on CIFAR-100 with a ResNet-18 backbone demonstrate improved adversarial accuracy under Fast Gradient Sign Method (FGSM) attacks.

Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper helps make computer vision models more secure by making them better at resisting slightly altered inputs designed to trick them into making mistakes. The researchers use two techniques to train the models: supervised contrastive learning, which groups similar images together and pushes different ones apart, and a margin-based contrastive loss, which keeps a clear gap between the classes the model has to tell apart. They test their approach on a dataset with 100 image categories using a popular computer vision architecture, and it holds up against these tricky inputs better than previous methods.

Keywords

» Artificial intelligence  » Clustering  » Contrastive loss  » Resnet  » Supervised