What Sketch Explainability Really Means for Downstream Tasks
by Hmrishav Bandyopadhyay, Pinaki Nath Chowdhury, Ayan Kumar Bhunia, Aneeshan Sain, Tao Xiang, Yi-Zhe Song
First submitted to arXiv on: 14 Mar 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same paper at a different level of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | Read the original abstract here
Medium | GrooveSquid.com (original content) | This paper investigates the role of the sketch modality in explainability, highlighting the significance of human strokes in understanding neural network behavior. The authors propose a lightweight plugin that integrates seamlessly with any pre-trained model, eliminating the need for re-training. The solution is demonstrated through four applications: retrieval, generation, assisted drawing, and sketch adversarial attacks. Central to the approach is a stroke-level attribution map that takes different forms depending on the downstream task. By addressing the non-differentiability of rasterisation, the authors enable explanations at both coarse (SLA) and partial (P-SLA) stroke levels, each suited to specific tasks (an illustrative code sketch follows the table).
Low | GrooveSquid.com (original content) | This paper looks at how we can understand why artificial intelligence models make certain decisions when working with sketches. The researchers create a special tool that works with any existing AI model without retraining it, making it easy to explain the thinking behind the model’s choices. They show how this tool can be used for different tasks, such as finding and generating images, helping people draw, and creating tricky sketches to test how robust an AI system is. This technology is important because it helps us understand AI better and make sure we’re using these powerful tools responsibly.
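
This page carries no code from the paper, so the following is a minimal, illustrative PyTorch sketch of the core idea behind a stroke-level attribution (SLA) map: a soft Gaussian-splat rasteriser stands in for the paper's handling of rasterisation's non-differentiability, and a toy frozen network stands in for "any pre-trained model". Every name here (`rasterise`, `stroke_level_attribution`, `sigma`) is an assumption for illustration, not the authors' API.

```python
# Illustrative only: soft rasterisation + gradient-based stroke attribution.
# This approximates the idea of a stroke-level attribution map; it is NOT
# the authors' implementation.
import torch

def rasterise(strokes, size=64, sigma=1.5):
    """Differentiable soft rasterisation: splat each stroke point as a
    Gaussian blob onto a size x size canvas, so gradients can flow from
    pixels back to stroke coordinates."""
    ys = torch.arange(size, dtype=torch.float32).view(-1, 1)
    xs = torch.arange(size, dtype=torch.float32).view(1, -1)
    canvas = torch.zeros(size, size)
    for stroke in strokes:                  # stroke: (n_points, 2), coords in [0, 1]
        for p in stroke * (size - 1):
            canvas = canvas + torch.exp(
                -((ys - p[1]) ** 2 + (xs - p[0]) ** 2) / (2 * sigma ** 2)
            )
    return canvas.clamp(max=1.0)

def stroke_level_attribution(strokes, model):
    """Coarse SLA: gradient magnitude of the frozen model's top score with
    respect to each whole stroke (restricting attention to subsets of points
    within a stroke would give a partial-stroke, P-SLA-style view)."""
    strokes = [s.clone().requires_grad_(True) for s in strokes]
    img = rasterise(strokes).view(1, 1, 64, 64)
    score = model(img).max()                # any scalar task output works here
    score.backward()                        # plugin-style: no re-training needed
    return [s.grad.norm().item() for s in strokes]

# Usage with a toy stand-in for a pre-trained model:
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64 * 64, 10))
for p in model.parameters():
    p.requires_grad_(False)                 # model stays frozen
strokes = [torch.rand(5, 2), torch.rand(8, 2)]
print(stroke_level_attribution(strokes, model))
```

Strokes whose coordinates receive larger gradient magnitudes are the ones the frozen model is most sensitive to, which is the intuition behind applying such a map to retrieval, generation, assisted drawing, and adversarial attacks.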
Keywords
» Artificial intelligence » Neural network