Summary of Gradient-free Post-hoc Explainability Using Distillation Aided Learnable Approach, by Debarpan Bhattacharya et al.
Gradient-free Post-hoc Explainability Using Distillation Aided Learnable Approach
by Debarpan Bhattacharya, Amir H. Poorjam, Deepak Mittal, Sriram Ganapathy
First submitted to arXiv on: 17 Sep 2024
Categories
- Main: Artificial Intelligence (cs.AI)
- Secondary: Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG); Image and Video Processing (eess.IV)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here. |
| Medium | GrooveSquid.com (original content) | This paper proposes a framework for explaining deep models post hoc without requiring gradient access. The distillation aided explainability (DAX) approach uses a mask generation network to identify salient regions of the input and a student distillation network to approximate the behavior of a black-box model. The two networks are jointly optimized on locally perturbed input samples, with targets derived solely from input-output access to the black-box model. The authors evaluate DAX on image and audio classification tasks across a range of evaluation metrics and compare it with nine existing methods; DAX significantly outperforms them on all modalities and metrics. A rough code sketch of this joint optimization appears below the table. |
| Low | GrooveSquid.com (original content) | This paper is about making AI models easier to understand without needing to see inside them. Today's large AI models are very good at certain tasks, but it is hard to figure out why they make particular decisions. The authors propose a way to explain these models that only needs the model's inputs and outputs. Their method, DAX (distillation aided explainability), uses two networks working together: one finds the important parts of the input, and the other learns to imitate the AI model's decisions. The authors test DAX on different kinds of data, such as images and sounds, and compare it with other methods, finding that it explains AI models much better than the alternatives. |
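
To make the Medium summary's description concrete, here is a minimal, hypothetical sketch of a DAX-style joint training step in PyTorch. The network architectures, the perturbation scheme (additive Gaussian noise), and the KL-divergence distillation loss are all illustrative assumptions based only on the summary above; the names `MaskGenerator`, `StudentNet`, `black_box`, and `dax_step` are invented for this example and are not the paper's actual code.

```python
# Hypothetical sketch of a DAX-style joint update (assumptions noted above).
import torch
import torch.nn as nn

class MaskGenerator(nn.Module):
    """Predicts a saliency mask (values in [0, 1]) over the input."""
    def __init__(self, in_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)  # (B, 1, H, W), broadcast over channels

class StudentNet(nn.Module):
    """Lightweight student that mimics the black box on masked inputs."""
    def __init__(self, in_ch=3, n_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def dax_step(x, black_box, mask_gen, student, optimizer, noise_std=0.1):
    """One joint update: perturb the input locally, query the black box
    for targets, and train mask + student to reproduce those targets."""
    x_pert = x + noise_std * torch.randn_like(x)    # local perturbation
    with torch.no_grad():                           # input-output access only:
        target = black_box(x_pert).softmax(dim=-1)  # no black-box gradients
    mask = mask_gen(x_pert)                         # salient regions
    pred = student(x_pert * mask).log_softmax(dim=-1)
    loss = nn.functional.kl_div(pred, target, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()                                 # updates mask_gen + student
    optimizer.step()
    return loss.item(), mask

# Usage sketch: one optimizer over both learnable networks, e.g.
# optimizer = torch.optim.Adam(
#     list(mask_gen.parameters()) + list(student.parameters()))
```

The property the sketch tries to preserve is the gradient-free claim: the black box is only ever queried under `torch.no_grad()`, so no gradients are required from (or propagated through) the explained model, and all learning signal flows through the mask generator and the student.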
Keywords
- Artificial intelligence
- Classification
- Distillation
- Mask