


EXMOS: Explanatory Model Steering Through Multifaceted Explanations and Data Configurations

by Aditya Bhattacharya, Simone Stumpf, Lucija Gosak, Gregor Stiglic, Katrien Verbert

First submitted to arXiv on: 1 Feb 2024

Categories

  • Main: Artificial Intelligence (cs.AI)
  • Secondary: Human-Computer Interaction (cs.HC)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
This paper explores how different types of explanations can help domain experts in healthcare improve their machine learning models by detecting and resolving potential data issues. The authors investigated the impact of global model-centric and data-centric explanations on trust, understanding, and model improvement. They conducted a mixed-methods study with 70 participants, each assigned to a condition featuring model-centric explanations, data-centric explanations, or a combination of both. The results showed that while data-centric explanations improved understanding, the hybrid approach combining both types was the most effective. The findings have implications for designing interactive machine-learning systems that provide effective explanations (an illustrative sketch contrasting the two explanation types appears after these summaries).
Low Difficulty Summary (original content by GrooveSquid.com)
This paper is about helping experts in healthcare use machine learning models more effectively. The researchers wanted to know whether certain kinds of explanations can help these experts find and fix problems in their data and thereby make their models work better. They compared different ways of explaining things: describing only the model, showing only what is happening in the data, or a mix of both. They found that the mix worked best: people understood more and were better able to improve their models.
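To make the distinction between the two explanation types concrete, here is a minimal, hypothetical sketch in Python. It assumes scikit-learn and a stand-in dataset; it is not the paper's healthcare data or the EXMOS system itself. It renders a global model-centric explanation as permutation feature importance and a data-centric explanation as a simple summary of the training data.

```python
# Hypothetical sketch (not from the EXMOS paper) contrasting the two
# explanation types the study compares. Dataset and model choices are
# illustrative assumptions, not the paper's setup.
import pandas as pd
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer(as_frame=True)  # stand-in for a healthcare dataset
X, y = data.data, data.target
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Model-centric view: which features drive the trained model's predictions?
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
importances = pd.Series(result.importances_mean,
                        index=X.columns).sort_values(ascending=False)
print("Global feature importance (model-centric):")
print(importances.head())

# Data-centric view: what does the training data itself look like?
quality = pd.DataFrame({
    "missing": X_train.isna().sum(),
    "mean": X_train.mean(),
    "std": X_train.std(),
})
print("\nTraining-data summary (data-centric):")
print(quality.head())
```

In this framing, the model-centric output explains the trained model's behavior, while the data-centric output surfaces properties of the data itself (for example, missing values) that an expert could act on to steer the model.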

Keywords

  • Artificial intelligence
  • Machine learning