Summary of DiffAM: Diffusion-based Adversarial Makeup Transfer for Facial Privacy Protection, by Yuhao Sun et al.
DiffAM: Diffusion-based Adversarial Makeup Transfer for Facial Privacy Protection
by Yuhao Sun, Lingyun Yu, Hongtao Xie, Jiaming Li, Yongdong Zhang
First submitted to arXiv on: 16 May 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | The paper proposes a novel face protection approach, dubbed DiffAM, to generate high-quality protected face images with adversarial makeup transferred from reference images. The method leverages the powerful generative ability of diffusion models and consists of two main components: a makeup removal module and an ensemble attack strategy. The former generates non-makeup images using a fine-tuned diffusion model guided by textual prompts in CLIP space, while the latter jointly guides the direction of the adversarial makeup domain. The approach achieves higher visual quality and attack success rates, with a gain of 12.98% under the black-box setting compared to state-of-the-art methods. |
| Low | GrooveSquid.com (original content) | The paper proposes a new way to protect face images from unauthorized face recognition systems. It uses a special kind of computer program, called a diffusion model, to make the protected face images look natural while still fooling the recognition software. The method works by first removing makeup from an image, then adding carefully crafted makeup so the result still looks like the original person to humans. This makes it harder for face recognition systems to identify the people in the protected images. The paper shows that this method keeps face images private better than other methods. |
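The "ensemble attack strategy" mentioned in the medium summary typically means averaging an identity loss over several surrogate face recognition (FR) models, which improves black-box transferability. The sketch below is only illustrative: the random-projection "surrogates" are hypothetical stand-ins for real pre-trained FR networks, and the loss structure (mean cosine distance to a target identity) is a common formulation for this kind of attack, not the paper's exact objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_surrogate(dim_in, dim_out):
    """Hypothetical stand-in for a pre-trained FR model: maps an image
    array to an L2-normalized identity embedding via a fixed random
    projection. A real pipeline would use actual FR networks here."""
    W = rng.standard_normal((dim_out, dim_in))
    def embed(x):
        v = W @ x.ravel()
        return v / np.linalg.norm(v)
    return embed

# Three surrogate "FR models" forming the ensemble.
surrogates = [make_surrogate(3 * 8 * 8, 128) for _ in range(3)]

def ensemble_attack_loss(protected_img, target_img):
    """Mean cosine distance between the protected image's identity
    embedding and the target identity, averaged over all surrogates.
    Minimizing this during makeup generation pushes the protected face
    toward the target identity for every surrogate at once, which is
    what gives the attack its black-box transferability."""
    losses = []
    for embed in surrogates:
        e_p, e_t = embed(protected_img), embed(target_img)
        losses.append(1.0 - float(e_p @ e_t))  # cosine distance in [0, 2]
    return sum(losses) / len(losses)

protected = rng.standard_normal((3, 8, 8))  # toy "protected" image
target = rng.standard_normal((3, 8, 8))     # toy target-identity image
print(f"{ensemble_attack_loss(protected, target):.4f}")
```

In DiffAM this kind of identity loss would be combined with the CLIP-space makeup guidance so that visual quality and attack strength are optimized jointly.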
Keywords
» Artificial intelligence » Diffusion » Diffusion model » Face recognition