Only My Model On My Data: A Privacy Preserving Approach Protecting One Model and Deceiving Unauthorized Black-Box Models
by Weiheng Chai, Brian Testa, Huantao Ren, Asif Salekin, Senem Velipasalar
First submitted to arXiv on: 14 Feb 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | The paper's original abstract (read it on arXiv).
Medium | GrooveSquid.com (original content) | This study addresses a practical, previously unexplored privacy-preservation use case: generating human-perceivable images that retain accurate inference for an authorized model while evading unauthorized black-box models with similar or dissimilar objectives. The approach tackles the limitations of existing techniques such as encryption and adversarial attacks, which either produce perturbed images unrecognizable to humans or block automated inference for all stakeholders. On ImageNet, Celeba-HQ, and AffectNet, the generated images preserve the protected model's accuracy while degrading the average accuracy of unauthorized black-box models to 11.97%, 6.63%, and 55.51%, respectively (a hypothetical sketch of this kind of dual-objective optimization follows the table).
Low | GrooveSquid.com (original content) | The paper generates human-perceivable images that stay accurate for authorized models but not for others. It uses three large datasets: ImageNet for general pictures, Celeba-HQ for faces, and AffectNet for emotions. The results show that these modified images work well for the "good guys" (the authorized model) while making it hard for the "bad guys" (unauthorized models) to understand them.
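The medium summary outlines a dual-objective image-generation problem: keep an authorized ("protected") model accurate on an image while degrading unauthorized black-box models. The paper's exact algorithm is not reproduced here, but a minimal PGD-style sketch in PyTorch conveys the idea. Everything in it is an assumption for illustration, not the authors' method: the function name, the use of white-box surrogate models as stand-ins for black-box targets, and the loss weighting `lam`.

```python
import torch
import torch.nn.functional as F

def generate_protected_image(x, y, protected_model, surrogate_models,
                             eps=8 / 255, alpha=2 / 255, steps=40, lam=1.0):
    """PGD-style dual-objective perturbation (illustrative sketch only).

    Keeps `protected_model` accurate on label `y` while pushing each
    surrogate (a stand-in for unauthorized black-box models) toward
    misclassification, within an L-inf ball of radius `eps` so the
    image stays human-perceivable.
    """
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Objective 1: the authorized model should still predict y.
        loss_keep = F.cross_entropy(protected_model(x_adv), y)
        # Objective 2: surrogates should lose confidence in y.
        loss_fool = sum(F.cross_entropy(m(x_adv), y) for m in surrogate_models)
        # Minimize the keep-loss, maximize the fool-loss.
        loss = loss_keep - lam * loss_fool
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()       # descent step on the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)             # stay a valid image
    return x_adv.detach()
```

In a true black-box setting the unauthorized models' gradients would be unavailable, so an approach like the paper's would have to rely on transfer from surrogates or query-based estimates; this sketch uses white-box surrogates purely for clarity.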
Keywords
- Artificial intelligence
- Inference