Summary of Evaluating Explanations Through LLMs: Beyond Traditional User Studies, by Francesco Bombassei De Bona et al.


Evaluating Explanations Through LLMs: Beyond Traditional User Studies

by Francesco Bombassei De Bona, Gabriele Dominici, Tim Miller, Marc Langheinrich, Martin Gjoreski

First submitted to arXiv on: 23 Oct 2024

Categories

  • Main: Artificial Intelligence (cs.AI)
  • Secondary: None

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
A novel approach is proposed for evaluating explainable AI (XAI) tools, which are crucial for building trust in healthcare and other sectors. Traditional user studies are often costly, time-consuming, and difficult to scale, making it challenging to assess how effective these tools really are. To address this, the authors explore whether Large Language Models (LLMs) can replicate human participants in XAI evaluation. The study compares counterfactual and causal explanations, replicating the human participants with seven LLMs under various settings. The results show that LLMs can reproduce most conclusions of the original user study, although different LLMs align with it to varying degrees, and experimental factors such as LLM memory and output variability also affect alignment with human responses. These initial findings suggest that LLMs could offer a scalable, cost-effective way to simplify qualitative XAI evaluation (a rough sketch of the simulated-participant setup follows the summaries below).
Low Difficulty Summary (original content by GrooveSquid.com)
XAI tools are important because they help people understand how AI makes decisions. To test these tools, we usually ask humans what they think of the explanations. However, this can be expensive, time-consuming, and hard to do with a lot of people. In this study, researchers used computer programs called large language models to stand in for human participants, making it easier and cheaper to evaluate XAI tools. They tested different types of explanations, like counterfactual and causal ones, and found that the computer models could reproduce most of the results from the original study. However, they also discovered that which model was used and how it was set up affected how closely the results matched the human ones.
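The medium summary describes the core idea at a high level: prompt an LLM to play the role of a study participant, show it an explanation, and collect its answers the way a user study would. The sketch below illustrates one possible shape of that setup. It is not the authors' protocol: the persona text, the 1-to-7 rating scale, the `query_llm` placeholder, and the example counterfactual are all illustrative assumptions.

```python
# A minimal, hypothetical sketch of using an LLM as a simulated study
# participant. The paper's actual prompts, models, rating scales, and
# analysis are not reproduced here; `query_llm` is a stand-in for whatever
# chat-completion client one would actually use.

from statistics import mean, stdev


def query_llm(prompt: str, temperature: float = 0.7) -> str:
    """Placeholder for a real LLM API call.

    Returns a fixed dummy answer so the sketch runs end to end; swap in a
    call to your preferred provider to use it for real.
    """
    return "4"


# Persona instructions so the model answers as a lay participant rather than
# as an AI assistant (illustrative wording, not taken from the paper).
PERSONA = (
    "You are a participant in a user study and have no machine-learning "
    "background. Answer the survey question honestly and concisely."
)


def rate_explanation(explanation: str, question: str, n_repeats: int = 5) -> list[int]:
    """Ask the simulated participant the same question several times.

    Repeating the query exposes output variability, one of the experimental
    factors the summary mentions as affecting alignment with human answers.
    """
    ratings = []
    for _ in range(n_repeats):
        prompt = (
            f"{PERSONA}\n\n"
            f"Explanation you were shown:\n{explanation}\n\n"
            f"{question}\n"
            "Reply with a single integer from 1 (not at all) to 7 (completely)."
        )
        ratings.append(int(query_llm(prompt).strip()))
    return ratings


if __name__ == "__main__":
    counterfactual = (
        "If the patient's blood pressure had been below 140, the model would "
        "have predicted 'low risk' instead of 'high risk'."
    )
    scores = rate_explanation(counterfactual, "How much do you trust this prediction?")
    print(f"mean rating = {mean(scores):.2f}, spread = {stdev(scores):.2f}")
```

In a study like the one summarized, the interesting comparison happens downstream: the same questions are put to human participants and to several LLMs, their answers are compared, and experimental settings such as the model's memory of earlier questions and its output variability are adjusted to see how they affect alignment, as the medium summary notes.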

Keywords

» Artificial intelligence  » Alignment