

Explaining Probabilistic Models with Distributional Values

by Luca Franceschi, Michele Donini, Cédric Archambeau, Matthias Seeger

First submitted to arXiv on: 15 Feb 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary — written by the paper authors
The high difficulty version is the paper's original abstract; read it via the arXiv links above.

Medium Difficulty Summary — written by GrooveSquid.com (original content)
This paper tackles a significant gap in explainable machine learning: the mismatch between what we want to explain (e.g., the output of a classifier) and what current methods like SHAP actually explain (e.g., the scalar probability of a class). To bridge this gap, the authors generalize cooperative games and value operators to probabilistic models. They introduce distributional values — random variables that track changes in the model output — and derive their expressions for models with Gaussian, Bernoulli, and Categorical payoffs. The framework provides fine-grained, contrastive explanations, demonstrated in case studies on vision and language models.

Low Difficulty Summary — written by GrooveSquid.com (original content)
This paper helps us understand why machines make certain decisions. Methods like SHAP explain a model's probability scores, but that is not always what we actually want explained. This research fills that gap by making it possible to explain why a machine is more likely to say something is one way rather than another. The scientists developed new ideas based on game theory and showed how they apply to different types of models. They tested their approach on examples from computer vision and language processing, and the results gave detailed, fine-grained insights.

Keywords

  • Artificial intelligence
  • Machine learning