Summary of Game-Theoretic Unlearnable Example Generator, by Shuang Liu, Yihan Wang, and Xiao-Shan Gao
Game-Theoretic Unlearnable Example Generator
by Shuang Liu, Yihan Wang, Xiao-Shan Gao
First submitted to arXiv on: 31 Jan 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Cryptography and Security (cs.CR); Machine Learning (stat.ML)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at a different level of difficulty. The medium-difficulty and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | This paper studies unlearnable example attacks, a type of data poisoning attack that aims to degrade the test accuracy of deep learning models. The attack can be formulated as a bi-level optimization problem, which is intractable to solve exactly for deep neural networks, so the authors instead take a game-theoretic view and cast it as a Stackelberg game with nonzero-sum payoffs. They prove that equilibria exist under both normal and adversarial training settings and show that the equilibrium yields the most powerful poison attack. Building on this analysis, they propose a novel attack method, Game Unlearnable Example (GUE), which generates poisons with an autoencoder-like network and measures attack performance with a new payoff function; a rough code sketch of this setup follows the table. Experiments demonstrate GUE’s effectiveness in various scenarios, even when the generator is trained on only a small fraction of the data. |
| Low | GrooveSquid.com (original content) | This paper looks at how to make deep learning models perform poorly on test data. The authors use game theory to understand how an attacker might add tiny changes to a model’s training data to make the model learn worse. They show that such an attack can succeed and propose a new way to carry it out, called GUE (Game Unlearnable Example). GUE uses a special kind of network to create these changes, or “poisons.” The authors tested GUE on different models and showed that it works well even when trained with only a little bit of data. |
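
To make the game structure in the medium-difficulty summary concrete, here is a minimal sketch in PyTorch of how such a Stackelberg training loop could look: an autoencoder-like generator (the leader) produces bounded poisons, a classifier (the follower) trains on them, and the leader is updated through a one-step look-ahead of the follower. This is not the authors' GUE implementation; the names `PoisonGenerator`, `follower_step`, and `leader_step`, the L-infinity budget, and the placeholder payoff (the clean-data loss of the one-step-updated classifier) are all assumptions for illustration, and the paper's own payoff function and solution algorithm are not reproduced here.

```python
# Minimal sketch, assuming PyTorch >= 2.0 (for torch.func.functional_call).
# This is NOT the authors' GUE code: class names, the architecture, the L-inf budget,
# and the placeholder payoff below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call


class PoisonGenerator(nn.Module):
    """Autoencoder-like leader: maps an image to a poisoned image inside an L-inf ball."""

    def __init__(self, eps: float = 8 / 255):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        # tanh bounds the perturbation to [-eps, eps] element-wise.
        return (x + self.eps * torch.tanh(self.net(x))).clamp(0.0, 1.0)


def follower_step(classifier, clf_opt, generator, x, y):
    """Follower best-responds: one optimizer step minimizing its loss on poisoned data."""
    clf_opt.zero_grad()
    loss = F.cross_entropy(classifier(generator(x).detach()), y)
    loss.backward()
    clf_opt.step()


def leader_step(generator, gen_opt, classifier, x, y, x_clean, y_clean, inner_lr=0.1):
    """Leader update through a one-step look-ahead of the follower.

    Placeholder payoff: the clean-data loss of the classifier after one simulated
    training step on the poisoned batch (the leader ascends this payoff).
    """
    gen_opt.zero_grad()
    params = dict(classifier.named_parameters())

    # Follower's objective on poisoned data, kept differentiable w.r.t. the generator.
    inner_loss = F.cross_entropy(classifier(generator(x)), y)
    grads = torch.autograd.grad(inner_loss, list(params.values()), create_graph=True)
    updated = {k: p - inner_lr * g for (k, p), g in zip(params.items(), grads)}

    # Evaluate the one-step-updated classifier on clean data and maximize its loss.
    clean_loss = F.cross_entropy(functional_call(classifier, updated, (x_clean,)), y_clean)
    (-clean_loss).backward()  # gradients accumulated on the classifier are cleared
    gen_opt.step()            # by clf_opt.zero_grad() in the next follower_step


# Alternating first-order play (train_loader / clean_loader / models / optimizers assumed):
# for (x, y), (xc, yc) in zip(train_loader, clean_loader):
#     follower_step(classifier, clf_opt, generator, x, y)
#     leader_step(generator, gen_opt, classifier, x, y, xc, yc)
```

The one-step unroll is only a crude first-order stand-in for the follower's full best response; the paper's own payoff function and solution algorithm would be needed for a faithful reproduction of GUE.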
Keywords
* Artificial intelligence
* Autoencoder
* Deep learning
* Optimization