Summary of "A Decision Theoretic Framework for Measuring AI Reliance," by Ziyang Guo et al.


A Decision Theoretic Framework for Measuring AI Reliance

by Ziyang Guo, Yifan Wu, Jason Hartline, Jessica Hullman

First submitted to arxiv on: 27 Jan 2024

Categories

  • Main: Artificial Intelligence (cs.AI)
  • Secondary: Human-Computer Interaction (cs.HC)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

Read the original abstract on arXiv.

Medium Difficulty Summary (GrooveSquid.com original content)

This paper proposes a formal, statistically grounded definition of human reliance on artificial intelligence (AI) systems, which is crucial for achieving complementary human-AI performance. Existing definitions lack statistical grounding and can lead to contradictions. The authors define reliance as the probability that the human follows the AI's recommendation, separating it from the challenges humans face in differentiating signals and forming accurate beliefs. This framework guides study design and interpretation, enabling researchers to distinguish losses due to mis-reliance from losses due to inaccurate signal differentiation. The authors demonstrate the framework on recent AI-advised decision-making studies, evaluating losses against a baseline and a benchmark for complementary performance.

Low Difficulty Summary (GrooveSquid.com original content)

This paper is about how humans work with artificial intelligence (AI) systems. Right now, we don't have a clear way of defining when someone follows an AI's suggestion and when they don't. The authors fix this by creating a formal definition that makes sense mathematically. In their view, relying on AI means the person has a certain chance of following the AI's advice, and that's separate from how well they understand what's going on. This new way of thinking helps us design better studies and see what really happens when people work with AI.
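To make the definition concrete, here is a minimal illustrative sketch (not code from the paper) of the core quantity the framework uses: reliance estimated as the empirical probability that a participant's final decision matches the AI's recommendation. The trial data below is invented for illustration.

```python
def estimate_reliance(human_decisions, ai_recommendations):
    """Estimate reliance as the fraction of trials on which the human's
    final decision agrees with the AI's recommendation."""
    assert len(human_decisions) == len(ai_recommendations)
    agreements = sum(h == a for h, a in zip(human_decisions, ai_recommendations))
    return agreements / len(human_decisions)

# Hypothetical study data: 1 = positive decision, 0 = negative decision
ai_recs = [1, 0, 1, 1, 0, 1, 0, 0]
humans  = [1, 0, 1, 0, 0, 1, 1, 0]

print(estimate_reliance(humans, ai_recs))  # 0.75
```

Under the paper's framing, this probability is the behavioral quantity of interest; how well the human differentiates signals and forms accurate beliefs is treated as a separate source of loss.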

Keywords

  • Artificial intelligence
  • Grounding
  • Probability