GLEAMS: Bridging the Gap Between Local and Global Explanations
by Giorgio Visani, Vincenzo Stanzione, Damien Garreau
First submitted to arXiv on: 9 Aug 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | GLEAMS is a novel machine learning method proposed to bridge the gap between local post-hoc explainability methods and global approaches. Local methods assign feature-importance scores but must recompute the explanation for every example, while global methods often produce explanations that are either overly simplistic or overly complex. GLEAMS partitions the input space and learns an interpretable model within each sub-region, yielding surrogates that are faithful both locally and globally. The method is demonstrated on synthetic and real-world data, showcasing its desirable properties and human-understandable insights. |
| Low | GrooveSquid.com (original content) | Machine learning algorithms are increasingly used to make predictions, but people want to know why a given prediction was made. Many explanation methods have been developed, yet some work for only one example at a time, while others make the explanation too simple or too complicated. GLEAMS is a new method that addresses this problem by dividing the input space into smaller regions and learning an interpretable model within each region, making both local and global explanations possible. |
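The partition-then-surrogate idea described above can be sketched in a few lines. The following is a minimal illustration of the general technique, not the authors' actual GLEAMS algorithm: it assumes a shallow decision tree as the partitioning step and a plain linear model per region, both hypothetical simplifications chosen for brevity.

```python
# Sketch of a partition-based surrogate explainer (illustrative only,
# not the GLEAMS algorithm itself): a shallow tree splits the input
# space into regions, and a linear model mimics the black box in each.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2  # nonlinear ground truth

# The opaque model whose predictions we want to explain.
black_box = RandomForestRegressor(random_state=0).fit(X, y)
preds = black_box.predict(X)

# Step 1: partition the input space by mimicking the black box
# with a shallow decision tree (hypothetical choice of partitioner).
partition = DecisionTreeRegressor(max_leaf_nodes=8, random_state=0).fit(X, preds)
leaf_ids = partition.apply(X)

# Step 2: fit one interpretable linear surrogate per region; its
# coefficients act as feature-importance scores valid throughout
# that region, so no per-example refitting is needed.
surrogates = {
    leaf: LinearRegression().fit(X[leaf_ids == leaf], preds[leaf_ids == leaf])
    for leaf in np.unique(leaf_ids)
}

def explain(x):
    """Return the local surrogate's coefficients for a single point x."""
    leaf = partition.apply(x.reshape(1, -1))[0]
    return surrogates[leaf].coef_

print(explain(np.array([0.5, -0.5])))
```

Together, the per-region surrogates also form a global explanation: the tree's splits describe where the black box changes behavior, and each region's coefficients describe how it behaves there.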
Keywords
- Artificial intelligence
- Machine learning