Summary of Learning Local Discrete Features in Explainable-by-design Convolutional Neural Networks, by Pantelis I. Kaplanoglou et al.
Learning local discrete features in explainable-by-design convolutional neural networks
by Pantelis I. Kaplanoglou, Konstantinos Diamantaras
First submitted to arXiv on: 31 Oct 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | The proposed ExplaiNet framework combines high-accuracy convolutional neural networks (CNNs) with probabilistic graphs to achieve explainability-by-design. The model consists of a predictor CNN with residual or dense skip connections and an explainer graph that expresses the spatial interactions between network neurons. The explainer operates on local discrete feature (LDF) vectors, which are learned with gradient descent and represent the indices of antagonistic neurons ordered by their activations (an illustrative sketch follows the table below). By repurposing EXTREME, a sequence motif discovery method, the LDFs can be treated as sequences to make explanations more concise. The framework then leverages the inherent explainability of Bayesian networks to attribute model outputs to global motifs. Experiments on tiny-image benchmark datasets confirm that the predictor matches the performance of the baseline architecture for a given number of parameters and layers, and the method shows promise for exceeding this performance while additionally providing explanations. |
Low | GrooveSquid.com (original content) | The paper proposes a new way to make deep learning models more understandable. The authors create a type of neural network that can explain its own decisions by combining a normal neural network with a graph that represents how the different parts of the network work together. The model learns to represent the most important features it uses, which helps us understand why it made certain predictions. The authors tested their approach on several image classification tasks and found that it performs as well as other methods while providing more insight into its decisions. |
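The exact construction of the local discrete features is defined in the paper itself; as a rough, non-authoritative illustration of the idea of "indices of antagonistic neurons ordered by their activations", the minimal sketch below sorts the competing channel responses at one spatial position of a feature map into a discrete index sequence. The function name, the `top_k` parameter, and the sample values are illustrative assumptions, not the authors' code or API.

```python
import numpy as np

def local_discrete_feature(activations, top_k=None):
    # `activations`: responses of C antagonistic (competing) neurons
    # at one spatial position of a convolutional feature map.
    # The LDF-style vector here is the sequence of neuron indices ordered
    # by decreasing activation, i.e. a discrete symbol string that motif
    # discovery methods could later be applied to.
    order = np.argsort(-np.asarray(activations))  # strongest neuron first
    return order if top_k is None else order[:top_k]

# Example: 8 competing channels at one pixel location (values are made up)
acts = [0.1, 2.3, -0.4, 1.7, 0.0, 3.1, 0.2, 1.1]
print(local_discrete_feature(acts, top_k=3))  # -> [5 1 3]
```

Treating such index sequences as strings is what lets a sequence motif discovery method such as EXTREME be repurposed to find recurring local patterns across the network.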
Keywords
» Artificial intelligence » CNN » Deep learning » Gradient descent » Image classification » Neural network