Summary of 2D-Malafide: Adversarial Attacks Against Face Deepfake Detection Systems, by Chiara Galdi et al.
2D-Malafide: Adversarial Attacks Against Face Deepfake Detection Systems
by Chiara Galdi, Michele Panariello, Massimiliano Todisco, Nicholas Evans
First submitted to arXiv on: 26 Aug 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Cryptography and Security (cs.CR); Machine Learning (cs.LG); Image and Video Processing (eess.IV)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper and is written at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the paper’s original abstract on arXiv. |
Medium | GrooveSquid.com (original content) | We introduce 2D-Malafide, a novel and lightweight adversarial attack designed to deceive face deepfake detection systems. Building upon 1D convolutional perturbations in the speech domain, our method leverages 2D convolutional filters to craft robust perturbations that significantly degrade state-of-the-art detector performance. Unlike additive-noise approaches, 2D-Malafide optimizes the coefficients of a convolutional filter, yielding adversarial perturbations that transfer across different face images. Our experiments on the FaceForensics++ dataset demonstrate that 2D-Malafide degrades detection performance in both white-box and black-box settings, with larger filter sizes having the greatest impact. We also report an explainability analysis using GradCAM, illustrating how 2D-Malafide misleads detection systems by altering the image regions used for classification. Our findings highlight the vulnerability of current deepfake detection systems to convolutional adversarial attacks and emphasize the need for future work on enhancing detection robustness through improved image fidelity constraints. (Illustrative code sketches of the filter optimization and the GradCAM analysis follow this table.) |
Low | GrooveSquid.com (original content) | We created a new way to trick face deepfake detectors, called 2D-Malafide. This method uses special filters to make small changes to face images that can fool even the best detectors. Unlike other methods, our approach makes these changes in a way that works across different face images. We tested the method on a large dataset and found that it can significantly reduce the accuracy of detectors. We also showed how the method works by looking at which parts of the image matter most for classification. Our research shows that current deepfake detection systems have weaknesses and need to be made more robust. |
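To make the mechanism concrete, here is a minimal PyTorch sketch of the filter-optimization idea described in the medium summary: instead of adding pixel-wise noise, a single 2D convolutional filter is learned so that filtering fake images flips the detector’s decision. The detector interface, loss, class indexing, and hyperparameters are assumptions for illustration, not the paper’s exact setup.

```python
# Minimal sketch of a 2D convolutional adversarial attack in the spirit of
# 2D-Malafide. Assumptions (not from the paper): `detector` maps an image
# batch (N, C, H, W) to class logits, with class 0 = real.
import torch
import torch.nn.functional as F

def optimize_malafide_filter(detector, fake_images, filter_size=7,
                             steps=200, lr=1e-2, real_class=0):
    """Optimize one K x K depthwise convolutional filter so that filtered
    fake images are classified as real. Optimizing over a batch of images
    is what makes the learned perturbation transfer across faces."""
    n, c = fake_images.size(0), fake_images.size(1)
    # Identity initialization: the filtered image starts equal to the input.
    k = torch.zeros(c, 1, filter_size, filter_size, device=fake_images.device)
    k[:, 0, filter_size // 2, filter_size // 2] = 1.0
    k.requires_grad_(True)

    opt = torch.optim.Adam([k], lr=lr)
    target = torch.full((n,), real_class, dtype=torch.long,
                        device=fake_images.device)
    for _ in range(steps):
        # Depthwise 2D convolution: the same learned filter per channel.
        filtered = F.conv2d(fake_images, k, padding=filter_size // 2, groups=c)
        loss = F.cross_entropy(detector(filtered), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return k.detach()
```

Starting from an identity kernel means optimization begins with an unperturbed image and gradually introduces the convolutional perturbation; larger `filter_size` values give the attack more capacity, matching the summary’s observation that larger filters have the greatest impact.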
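The GradCAM analysis mentioned in the summary can be sketched similarly. The following is a generic GradCAM implementation (a standard explainability technique, not the paper’s code) that highlights which image regions drive a CNN detector’s real/fake decision; `target_layer` is assumed to be the detector’s final convolutional layer.

```python
# Generic GradCAM sketch: weight the target layer's activation maps by the
# spatially averaged gradients of the chosen class logit, then ReLU and
# upsample to image resolution.
import torch
import torch.nn.functional as F

def grad_cam(detector, image, target_layer, class_idx):
    acts, grads = {}, {}
    h1 = target_layer.register_forward_hook(
        lambda mod, inp, out: acts.update(a=out))
    h2 = target_layer.register_full_backward_hook(
        lambda mod, gin, gout: grads.update(g=gout[0]))

    logits = detector(image)            # image: (1, C, H, W)
    detector.zero_grad()
    logits[0, class_idx].backward()     # gradients w.r.t. the chosen class
    h1.remove(); h2.remove()

    # Channel weights = average gradient per activation map.
    w = grads["g"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((w * acts["a"]).sum(dim=1, keepdim=True)).detach()
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear",
                        align_corners=False)
    return cam / (cam.max() + 1e-8)    # normalized heatmap in [0, 1]
```

Comparing the heatmap for a fake image before and after applying the adversarial filter shows the kind of shift the summary describes: the regions the detector relies on for classification change under the attack.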
Keywords
» Artificial intelligence » Classification