Summary of Understanding Multimodal Deep Neural Networks: a Concept Selection View, by Chenming Shang et al.
Understanding Multimodal Deep Neural Networks: A Concept Selection View
by Chenming Shang, Hengyuan Zhang, Hao Wen, Yujiu Yang
First submitted to arXiv on: 13 Apr 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | This paper investigates the decision-making process of CLIP, a high-performing multimodal deep neural network. Concept-based models enhance transparency by mapping visual representations onto human-understandable concepts, but they rely on datasets labeled with fine-grained attributes, which is costly and introduces bias. The proposed two-stage Concept Selection Model (CSM) mines core concepts without prior human knowledge or bias: a greedy rough-selection algorithm first extracts head concepts, then a mask fine-selection method extracts the core concepts. CSM matches the performance of black-box models, and human evaluators find its concepts interpretable and comprehensible. |
| Low | GrooveSquid.com (original content) | This paper is about understanding how CLIP, a very capable AI model, makes decisions. Right now it is hard to know why it chooses certain things because its process is too complex. To fix this, we can use "concept-based" models that show what the AI sees in terms of simple ideas like shapes and colors. But those models need a lot of special help from people to work well. The new idea in this paper is a way to find the important concepts without needing human help, which makes it much easier for people to understand why the AI chose what it did. |
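The two-stage selection described in the medium summary can be sketched in plain Python. This is an illustrative toy, not the paper's implementation: the concept scores would in practice come from CLIP image-concept similarities, the learned mask of the fine-selection stage is replaced here with a crude leave-one-out accuracy test, and all function names are hypothetical.

```python
import numpy as np

def centroid_accuracy(feats, labels):
    """Nearest-class-centroid accuracy on the given concept features
    (a simple stand-in for a downstream classifier)."""
    classes = np.unique(labels)
    centroids = np.stack([feats[labels == c].mean(axis=0) for c in classes])
    dists = ((feats[:, None, :] - centroids[None]) ** 2).sum(-1)
    preds = classes[dists.argmin(axis=1)]
    return float((preds == labels).mean())

def greedy_rough_selection(scores, labels, k):
    """Stage 1 (sketch): greedily add the concept whose inclusion most
    improves classification, yielding k 'head' concepts.
    scores: (n_images, n_concepts) image-concept similarity matrix."""
    selected, remaining = [], list(range(scores.shape[1]))
    for _ in range(k):
        best, best_acc = None, -1.0
        for c in remaining:
            acc = centroid_accuracy(scores[:, selected + [c]], labels)
            if acc > best_acc:
                best, best_acc = c, acc
        selected.append(best)
        remaining.remove(best)
    return selected

def mask_fine_selection(scores, labels, selected):
    """Stage 2 (sketch): keep only 'core' concepts, dropping any head
    concept whose removal does not hurt accuracy. The paper learns a
    mask instead; this leave-one-out test is a crude proxy."""
    base = centroid_accuracy(scores[:, selected], labels)
    core = []
    for c in selected:
        rest = [s for s in selected if s != c]
        if not rest or centroid_accuracy(scores[:, rest], labels) < base:
            core.append(c)
    return core
```

On a toy similarity matrix where only the first two concepts separate the classes, `greedy_rough_selection` picks those two first; `mask_fine_selection` then prunes any that turn out to be redundant.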
Keywords
* Artificial intelligence * Mask * Neural network