Summary of “How to Craft Backdoors with Unlabeled Data Alone?” by Yifei Wang et al.
How to Craft Backdoors with Unlabeled Data Alone?
by Yifei Wang, Wenhan Ma, Stefanie Jegelka, Yisen Wang
First submitted to arXiv on: 10 Apr 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Cryptography and Security (cs.CR)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | In this paper, the researchers investigate the risk of self-supervised learning (SSL) models being poisoned with malicious data. They focus on a specific type of attack called no-label backdoors, in which the attacker only has access to unlabeled data and therefore cannot rely on labels to choose which samples to poison. The authors propose two strategies for selecting the poison set without label information: clustering-based selection using pseudolabels, and contrastive selection derived from the mutual information principle (see the illustrative sketch below this table). They demonstrate the effectiveness of these attacks against various SSL methods on datasets including CIFAR-10 and ImageNet-100. |
Low | GrooveSquid.com (original content) | This paper looks at how to make self-supervised learning models behave badly by slipping a small amount of bad data into their training set. It is called a “no-label backdoor” attack because the attacker only has data without any labels on it. The researchers came up with two ways to pick which examples to poison: one groups similar-looking pictures together, and the other uses a measure of how much information different examples share with each other. They tried these attacks on several popular self-supervised learning methods and showed that they were very effective. |
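To make the clustering-based strategy more concrete, here is a minimal, hypothetical sketch of how a poison set might be selected from unlabeled data. This is not the authors’ implementation: it assumes a pretrained SSL encoder has already produced a feature matrix, uses scikit-learn k-means as a stand-in for pseudolabeling, and the function names, cluster-compactness heuristic, and corner-patch trigger are illustrative choices only.

```python
# Hypothetical sketch of clustering-based poison-set selection (not the paper's code).
# Assumes `features` is an (N, d) array of SSL features for N unlabeled images,
# and `images` is an (N, H, W, C) array of the corresponding images.
import numpy as np
from sklearn.cluster import KMeans


def select_poison_set(features: np.ndarray, n_clusters: int = 10, budget: int = 500) -> np.ndarray:
    """Cluster SSL features into pseudolabels and pick poison candidates from the
    most compact cluster (a rough proxy for a consistent pseudo-class)."""
    kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(features)
    # Distance of each sample to its own cluster centroid.
    dists = np.linalg.norm(features - kmeans.cluster_centers_[kmeans.labels_], axis=1)
    # Choose the cluster whose members sit closest to their centroid on average.
    mean_dists = np.array([dists[kmeans.labels_ == c].mean() for c in range(n_clusters)])
    target_cluster = int(mean_dists.argmin())
    # Within that cluster, poison the samples nearest the centroid, up to the budget.
    candidates = np.where(kmeans.labels_ == target_cluster)[0]
    ordered = candidates[np.argsort(dists[candidates])]
    return ordered[:budget]


def add_trigger(images: np.ndarray, poison_idx: np.ndarray, patch_value: float = 1.0) -> np.ndarray:
    """Stamp a simple 4x4 corner patch (a stand-in backdoor trigger) on the selected images."""
    poisoned = images.copy()
    poisoned[poison_idx, -4:, -4:, :] = patch_value
    return poisoned
```

The idea behind such a selection is that, when an SSL model is later pretrained on the poisoned unlabeled data, the trigger patch co-occurs with one consistent pseudo-class, so the learned representation can tie the trigger to that class without the attacker ever seeing a label.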
Keywords
* Artificial intelligence
* Clustering
* Self-supervised learning