Summary of Towards Quantitative Evaluation of Explainable AI Methods for Deepfake Detection, by Konstantinos Tsigos et al.
Towards Quantitative Evaluation of Explainable AI Methods for Deepfake Detection
by Konstantinos Tsigos, Evlampios Apostolidis, Spyridon Baxevanakis, Symeon Papadopoulos, Vasileios Mezaris
First submitted to arXiv on: 29 Apr 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | The proposed framework evaluates how well explanation methods account for a deepfake detector’s decisions, by applying adversarial attacks to the image regions that each method flags as influential. It measures the methods’ ability both to spot these influential regions and, by modifying them, to flip or weaken the detector’s initial prediction. A comparative study using a state-of-the-art model trained on FaceForensics++ and five explanation methods from the literature shows that LIME performs best and is the most suitable for explaining the deepfake detector’s decisions. |
| Low | GrooveSquid.com (original content) | The paper introduces a new framework to evaluate explanation methods for a deepfake detector’s decisions. It tests how well these methods can identify regions in fake images that affect the detection outcome. The study compares five different explanation methods with a state-of-the-art deepfake detector trained on FaceForensics++ data. The results show that LIME is the best method for explaining the detector’s decisions. |
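The core idea of the framework — perturb the regions an explanation method marks as influential, then check how much the detector’s prediction changes — can be sketched as follows. This is a simplified illustration, not the paper’s implementation: the paper uses adversarial attacks on the flagged regions, while the sketch below uses plain occlusion (zeroing pixels), and the `detector` and `saliency` inputs are hypothetical placeholders for a real model and a real explanation map.

```python
import numpy as np

def evaluate_explanation(detector, image, saliency, top_fraction=0.1):
    """Occlude the most influential pixels according to the saliency map
    and measure how much the detector's score drops.

    detector     : callable mapping an image array to a scalar score
                   (hypothetical stand-in for a trained deepfake detector)
    image        : 2D numpy array (a toy stand-in for a face image)
    saliency     : 2D numpy array of the same shape, produced by an
                   explanation method (e.g. LIME)
    top_fraction : fraction of pixels to perturb, ranked by saliency

    A larger score drop suggests the explanation method correctly
    identified regions that actually drive the detector's decision.
    """
    flat = saliency.flatten()
    k = max(1, int(top_fraction * flat.size))
    top_idx = np.argsort(flat)[-k:]        # indices of the k most salient pixels
    perturbed = image.copy().flatten()
    perturbed[top_idx] = 0.0               # simple occlusion perturbation
    perturbed = perturbed.reshape(image.shape)
    return detector(image) - detector(perturbed)

# Toy usage: a "detector" that just averages pixel intensities, and a
# saliency map that (correctly) highlights two pixels of a bright image.
image = np.ones((4, 4))
saliency = np.zeros((4, 4))
saliency[0, 0] = saliency[0, 1] = 1.0
detector = lambda x: float(x.mean())
drop = evaluate_explanation(detector, image, saliency, top_fraction=2 / 16)
print(drop)  # positive: occluding the flagged pixels lowered the score
```

Under this scheme, a better explanation method yields a larger drop for the same perturbation budget, which is the intuition behind ranking the five compared methods.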