Summary of Root Causing Prediction Anomalies Using Explainable AI, by Ramanathan Vishnampet et al.
Root Causing Prediction Anomalies Using Explainable AI
by Ramanathan Vishnampet, Rajesh Shenoy, Jianhui Chen, Anuj Gupta
First submitted to arXiv on: 4 Mar 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | The paper's original abstract. |
| Medium | GrooveSquid.com (original content) | This paper proposes a novel application of explainable AI (XAI) to identify the root cause of performance degradation in machine learning models that learn from user engagement data. The approach targets personalized advertising models, which are typically trained continuously on features produced by hundreds of real-time processing pipelines or derived from other upstream models. A failure in any of these pipelines, or instability in any upstream model, can corrupt features, leading to prediction anomalies and corrupted training data. The paper demonstrates that temporal shifts in global feature importance distributions can effectively isolate the cause of a prediction anomaly, outperforming model-to-feature correlation methods. The technique is model-agnostic, cost-effective, and well suited to monitoring complex data pipelines in production; a minimal sketch of the core idea appears after this table. |
| Low | GrooveSquid.com (original content) | This paper uses a technique called explainable AI to understand why machine learning models that show people ads sometimes go wrong. These models learn from lots of data and can be thrown off by small mistakes in that data. The researchers came up with a new way to figure out what went wrong when a model makes a mistake. They tested it and found that it works well even with limited information. This technique could help keep the models that decide which ads people see working correctly. |
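As a rough illustration of the medium-difficulty summary above, the sketch below ranks features by how much their global importance shifts between a healthy baseline window and an anomalous window; the features with the largest shifts are the leading root-cause candidates (e.g., a corrupted upstream pipeline). This is a minimal sketch, not the paper's implementation: the input dictionaries, the use of absolute differences as the shift measure, and all names are illustrative assumptions (the per-window importances could come from, say, permutation importance or mean absolute attribution values).

```python
def importance_shift(baseline_importances, anomaly_importances):
    """Rank features by how much their global importance changed
    between a healthy baseline window and the anomalous window."""
    shifts = {}
    for feature, base_value in baseline_importances.items():
        anomaly_value = anomaly_importances.get(feature, 0.0)
        shifts[feature] = abs(anomaly_value - base_value)
    # Sort so the features whose importance moved the most come first.
    return sorted(shifts.items(), key=lambda kv: kv[1], reverse=True)


# Hypothetical global importances aggregated over each time window.
baseline = {"user_ctr_7d": 0.42, "ad_quality": 0.31, "geo_bucket": 0.12}
anomaly = {"user_ctr_7d": 0.05, "ad_quality": 0.33, "geo_bucket": 0.14}

for feature, shift in importance_shift(baseline, anomaly):
    print(f"{feature}: shift = {shift:.2f}")
```

In this toy example, the large drop in the importance of `user_ctr_7d` would flag the pipeline producing that feature as the most likely source of the prediction anomaly.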
Keywords
* Artificial intelligence
* Machine learning