
Summary of Explainability of Machine Learning Models under Missing Data, by Tuan L. Vo et al.


Explainability of Machine Learning Models under Missing Data

by Tuan L. Vo, Thu Nguyen, Luis M. Lopez-Ramos, Hugo L. Hammer, Michael A. Riegler, Pål Halvorsen

First submitted to arXiv on: 29 Jun 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here
Medium Difficulty Summary (written by GrooveSquid.com; original content)
The paper investigates how different imputation methods affect the explainability of complex machine learning models, using SHAP, a popular technique for explaining model predictions. It experimentally compares how various imputation strategies influence feature importance, feature interactions, and Shapley values, and finds that the choice of imputation method can introduce biases that distort model explanations. The study also shows that a lower test prediction MSE does not necessarily imply a lower MSE in the Shapley values, highlighting the need to account for imputation effects in order to draw robust insights from machine learning models (a code sketch illustrating this kind of comparison follows the summaries below).
Low Difficulty Summary (written by GrooveSquid.com; original content)
This paper explores how missing data affects the explainability of complex machine learning models that are interpreted with SHAP. It compares different methods for filling in missing data and finds that the choice of method can change which features look important and how they appear to interact. The study also shows that a model can be good at predicting outcomes yet less reliable at explaining why it made those predictions. This matters because we need trustworthy insights from our models.
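
The kind of comparison described in the summaries above can be sketched in a few lines of Python. The snippet below is a hypothetical illustration, not the authors' experimental pipeline: it assumes scikit-learn and the shap package are installed, injects missingness only into the training features, uses a 20% MCAR rate and a random forest purely as illustrative choices, and treats SHAP values computed on the fully observed data as the reference for the "SHAP MSE".

```python
# Minimal sketch (not the authors' code): impute missing data with two strategies,
# fit the same model on each imputed dataset, then compare both the test prediction
# MSE and the MSE between the resulting SHAP values and those from fully observed data.
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import SimpleImputer, IterativeImputer
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_regression(n_samples=500, n_features=6, noise=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Reference: model and SHAP values obtained from the fully observed training data.
ref_model = RandomForestRegressor(random_state=0).fit(X_train, y_train)
ref_shap = shap.TreeExplainer(ref_model).shap_values(X_test)

# Introduce values missing completely at random (MCAR) in the training features.
X_train_miss = X_train.copy()
X_train_miss[rng.random(X_train.shape) < 0.2] = np.nan

for name, imputer in [("mean", SimpleImputer(strategy="mean")),
                      ("iterative", IterativeImputer(random_state=0))]:
    X_imp = imputer.fit_transform(X_train_miss)
    model = RandomForestRegressor(random_state=0).fit(X_imp, y_train)
    pred_mse = mean_squared_error(y_test, model.predict(X_test))
    shap_vals = shap.TreeExplainer(model).shap_values(X_test)
    shap_mse = np.mean((shap_vals - ref_shap) ** 2)
    print(f"{name}: test MSE={pred_mse:.3f}, SHAP MSE vs. full data={shap_mse:.3f}")
```

Under this setup, the imputation method with the lower test MSE need not be the one whose SHAP values stay closest to the fully observed reference, which is the effect the paper highlights.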

Keywords

  » Artificial intelligence  » Machine learning  » MSE