Summary of Explaining Drift Using Shapley Values, by Narayanan U. Edakunni, Utkarsh Tekriwal, and Anukriti Jain
Explaining Drift using Shapley Values
by Narayanan U. Edakunni, Utkarsh Tekriwal, Anukriti Jain
First submitted to arXiv on: 18 Jan 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | Read the original abstract here
Medium | GrooveSquid.com (original content) | The proposed framework, DBShap, uses Shapley values to identify and quantify the primary contributors to concept drift in machine learning models. This approach not only determines the importance of individual features driving the drift but also considers changes in the underlying input-output relationship as a potential driver. By explaining the root causes behind the drift, DBShap enables model developers to make their models more resilient to such changes.
Low | GrooveSquid.com (original content) | This paper introduces a new way to figure out why machine learning models stop working well when they're used on data that's different from what they were trained on. This problem happens often in real-life situations, like during pandemics. Scientists have tried many ways to make their models better at handling these changes, but they didn't have a principled way to find the reasons behind the problems. The authors propose a new method called DBShap that helps identify and measure what's causing the issues. This information can then be used to fix the models so they work better in changed situations.
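To make the core idea concrete, here is a minimal sketch, assuming a simple covariate-drift setting, of how Shapley values can attribute a drift score to individual features: each feature is treated as a player, and its contribution is its average marginal effect on a drift metric when that feature's values are switched from the reference data to the drifted data. The function names (`drift_metric`, `shapley_drift_attribution`), the toy drift metric, and the permutation-sampling approximation are illustrative assumptions, not the paper's DBShap implementation, which additionally attributes drift to changes in the underlying input-output relationship.

```python
"""Illustrative sketch (not the authors' DBShap code): attribute a drift score
to individual features with Shapley values, approximated by sampling
permutations of the features."""
import numpy as np


def drift_metric(x_ref: np.ndarray, x_mix: np.ndarray) -> float:
    """Toy drift score: summed absolute difference of per-feature means.

    In practice this could be any scalar measure of distribution or
    performance change (e.g. a divergence estimate or a drop in accuracy).
    """
    return float(np.abs(x_ref.mean(axis=0) - x_mix.mean(axis=0)).sum())


def shapley_drift_attribution(x_ref: np.ndarray, x_cur: np.ndarray,
                              n_permutations: int = 200,
                              seed: int = 0) -> np.ndarray:
    """Approximate each feature's Shapley contribution to the drift score.

    Assumes x_ref and x_cur have the same shape (subsample in practice if the
    reference and current datasets differ in size).
    """
    rng = np.random.default_rng(seed)
    n_features = x_ref.shape[1]
    contributions = np.zeros(n_features)

    for _ in range(n_permutations):
        order = rng.permutation(n_features)
        x_mix = x_ref.copy()                       # start fully at the reference
        prev_score = drift_metric(x_ref, x_mix)    # zero drift by construction
        for j in order:
            x_mix[:, j] = x_cur[:, j]              # "switch on" feature j's drift
            score = drift_metric(x_ref, x_mix)
            contributions[j] += score - prev_score  # marginal contribution
            prev_score = score

    return contributions / n_permutations


if __name__ == "__main__":
    rng = np.random.default_rng(42)
    x_ref = rng.normal(size=(1000, 4))
    x_cur = x_ref.copy()
    x_cur[:, 2] += 1.5                             # inject drift into feature 2 only
    print(shapley_drift_attribution(x_ref, x_cur))  # attribution concentrates on feature 2
```

By construction the per-feature contributions sum to the total drift score between the reference and current data, which is the efficiency property that makes Shapley values attractive for this kind of root-cause attribution.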
Keywords
* Artificial intelligence
* Machine learning