Measuring Feature Dependency of Neural Networks by Collapsing Feature Dimensions in the Data Manifold
by Yinzhu Jin, Matthew B. Dwyer, P. Thomas Fletcher
First submitted to arXiv on: 18 Apr 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Computer Vision and Pattern Recognition (cs.CV)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper but are written at different levels of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | This paper proposes a novel approach to measuring the feature dependency of neural network models, i.e., whether a model relies on human-understandable features such as anatomical shape, volume, or image texture. The method collapses the dimension of a given feature in the data manifold, effectively removing that feature, and analyzes how the model’s performance changes, revealing how much the model uses information from that dimension. The authors test their method on synthetic image data, on Alzheimer’s disease prediction from MRIs and hippocampus segmentations, and on cell nuclei classification using the Lizard dataset. The proposed technique has the potential to improve our understanding of deep neural networks by revealing which features they actually rely on. |
| Low | GrooveSquid.com (original content) | This paper is about a new way to understand how computer models work. It helps us figure out whether these models are using information from pictures or other data that humans can also understand. To do this, scientists take away specific details and see how the model’s performance changes. They tested this method on fake images, brain scans used for Alzheimer’s diagnosis, and pictures of cells. This new technique can help us learn more about how computer models think. |
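As a rough intuition for the “remove a feature, measure the drop” idea, here is a minimal Python sketch using scikit-learn. It is not the authors’ method: the paper collapses feature dimensions along the data manifold, whereas this toy example simply collapses each raw input coordinate to its training mean and reports the resulting accuracy drop. All names and data here are hypothetical illustrations.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy data: 5 features, only 3 of which are actually informative.
X, y = make_classification(n_samples=2000, n_features=5, n_informative=3,
                           n_redundant=0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
baseline = model.score(X_test, y_test)

# "Collapse" each feature by replacing it with its training mean, so it
# no longer carries information, and measure the drop in accuracy.
for j in range(X.shape[1]):
    X_collapsed = X_test.copy()
    X_collapsed[:, j] = X_train[:, j].mean()
    drop = baseline - model.score(X_collapsed, y_test)
    print(f"feature {j}: accuracy drop {drop:.3f}")
```

A large drop suggests the model depends heavily on that feature; a near-zero drop suggests the feature is unused. The paper’s contribution is doing this kind of ablation for features that are not raw input coordinates, by collapsing their dimensions in the data manifold instead.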
Keywords
» Artificial intelligence » Classification » Neural network