Summary of Exploring the Effect of Explanation Content and Format on User Comprehension and Trust in Healthcare, by Antonio Rago et al.
Exploring the Effect of Explanation Content and Format on User Comprehension and Trust in Healthcare
by Antonio Rago, Bence Palfi, Purin Sukpanichnant, Hannibal Nabli, Kavyesh Vivek, Olga Kostopoulou, James Kinross, Francesca Toni
First submitted to arXiv on: 30 Aug 2024
Categories
- Main: Artificial Intelligence (cs.AI)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | The paper's original abstract (available on arXiv) |
Medium | GrooveSquid.com (original content) | The paper investigates how users come to trust AI-driven healthcare tools, focusing on explanations of the predictions made by QCancer, a regression-based cancer risk prediction model. The authors examine how the content and format of explanations affect user comprehension and trust, comparing two explanation types, SHAP (SHapley Additive exPlanations) and Occlusion-1, presented in different formats such as charts or text. Experiments were run with two stakeholder groups: the general public and medical students. Results show that users prefer Occlusion-1 over SHAP explanations on the basis of content, but when controlling for format, only text-based explanations outperform chart-based ones, suggesting that the format of an explanation may matter more than its content. (A code sketch contrasting the two explanation types follows the table.) |
Low | GrooveSquid.com (original content) | AI tools can help healthcare professionals make better decisions, but their predictions need to be explained in a way that users understand and trust. This study looks at how explanations of a cancer risk model, QCancer, come across to people. The researchers tested two types of explanations, SHAP and Occlusion-1, and presented them in different ways, such as charts or text. People from the general public and medical students took part. The results show that participants preferred one type of explanation (Occlusion-1) over the other, and that how an explanation is presented matters. This is important because it can help make AI tools more trustworthy. |
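To make the two explanation types concrete, here is a minimal, hypothetical sketch of how SHAP and Occlusion-1 attributions can be computed for a single prediction. It is not the paper's code or data: a stand-in scikit-learn logistic regression replaces the QCancer model, the four "risk factors" are synthetic, and the use of the `shap` package is an illustrative assumption.

```python
# Hypothetical sketch: contrast Occlusion-1 and SHAP attributions for one
# prediction. A synthetic logistic-regression model stands in for QCancer.
# Assumes numpy, scikit-learn and the `shap` package are installed.
import numpy as np
from sklearn.linear_model import LogisticRegression
import shap

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                      # 4 made-up risk factors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)
patient = X[:1]                                    # one patient to explain

# Occlusion-1: "remove" (here, zero out) each feature in turn and record how
# much the predicted risk changes; that change is the feature's attribution.
baseline_risk = model.predict_proba(patient)[0, 1]
occlusion_1 = []
for j in range(X.shape[1]):
    occluded = patient.copy()
    occluded[0, j] = 0.0                           # occlude feature j
    occlusion_1.append(baseline_risk - model.predict_proba(occluded)[0, 1])

# SHAP: Shapley-value attributions for the same prediction. For a linear
# model these are reported in log-odds units, so only the relative ranking
# of features is directly comparable with the occlusion-1 values above.
explainer = shap.Explainer(model, X)
shap_values = explainer(patient).values[0]

print("Occlusion-1 attributions:", np.round(occlusion_1, 3))
print("SHAP attributions:       ", np.round(shap_values, 3))
```

Either set of numbers can then be rendered as a chart (e.g. a bar plot of attributions) or as text (a sentence per feature describing its contribution), which is the content-versus-format distinction the study examines.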
Keywords
» Artificial intelligence » Regression