Adversarial Sparse Teacher: Defense Against Distillation-Based Model Stealing Attacks Using Adversarial Examples
by Eda Yilmaz, Hacer Yalim Keles
First submitted to arXiv on: 8 Mar 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Cryptography and Security (cs.CR); Computer Vision and Pattern Recognition (cs.CV)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper but are written at different levels of difficulty: the medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | Adversarial Sparse Teacher (AST) defends against distillation-based model stealing by training the teacher model on adversarial examples so that it produces sparse logit responses with increased output-distribution entropy. The method alters the original response by embedding perturbed logits into the output while elevating the remaining logits to keep entropy high. A proposed Exponential Predictive Divergence (EPD) loss function effectively confuses attackers. On the CIFAR-10 and CIFAR-100 datasets, AST outperforms state-of-the-art methods, defending against model stealing while preserving high accuracy. |
| Low | GrooveSquid.com (original content) | AST is a new way to protect models from being copied by making their answers harder to understand. It uses special examples to make the teacher model give confusing answers, so attackers cannot easily copy the model and use it for themselves. AST was tested on two big datasets and did better than other methods at keeping the model safe while still giving good answers. |
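The summary describes the defense only at a high level, so the snippet below is a minimal, hypothetical sketch (PyTorch assumed) of the core idea: elevating the non-argmax logits toward the maximum so the output distribution's entropy rises while the predicted class, and therefore accuracy, is preserved. The function names and the `lift` parameter are invented for illustration, and `epd_like_loss` is a plain KL-divergence stand-in, since the paper's actual EPD formula is not given in this summary. Note also that the paper trains the teacher with adversarial examples; this sketch only illustrates the output-perturbation idea.

```python
import torch
import torch.nn.functional as F

def lift_non_top_logits(logits: torch.Tensor, lift: float = 0.8) -> torch.Tensor:
    """Move every non-maximal logit a fraction `lift` of the way toward the
    maximum. The argmax (and thus the prediction) is unchanged, but the
    softmax distribution becomes flatter, i.e. its entropy increases."""
    top_vals = logits.max(dim=-1, keepdim=True).values
    return logits + lift * (top_vals - logits)  # the top logit itself is unaffected

def epd_like_loss(student_logits: torch.Tensor,
                  teacher_logits: torch.Tensor) -> torch.Tensor:
    """Placeholder for the paper's EPD loss: plain KL divergence between the
    student's distribution and the perturbed teacher's distribution."""
    log_p = F.log_softmax(student_logits, dim=-1)
    q = F.softmax(teacher_logits, dim=-1)
    return F.kl_div(log_p, q, reduction="batchmean")

# Example with a batch of 10-class outputs (CIFAR-10-sized).
logits = torch.randn(4, 10)
defended = lift_non_top_logits(logits)
assert torch.equal(defended.argmax(-1), logits.argmax(-1))  # prediction preserved
```

A student distilled against such flattened outputs receives far less information about inter-class similarities than it would from the raw logits, which is the general mechanism by which this family of defenses degrades the stolen copy.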
Keywords
Artificial intelligence, Distillation, Embedding, Logits, Loss function, Teacher model