Summary of Controlling Counterfactual Harm in Decision Support Systems Based on Prediction Sets, by Eleni Straitouri et al.


Controlling Counterfactual Harm in Decision Support Systems Based on Prediction Sets

by Eleni Straitouri, Suhas Thejaswi, Manuel Gomez Rodriguez

First submitted to arXiv on: 10 Jun 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Computers and Society (cs.CY); Human-Computer Interaction (cs.HC); Methodology (stat.ME)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper and are written at different levels of difficulty: the medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper proposes a novel approach to designing decision support systems (DSS) that use prediction sets to aid humans in multiclass classification tasks. The authors observe that, while such systems can improve average accuracy, they also restrict human agency and may cause counterfactual harm: a human who would have succeeded without the system may fail when using it. To address this, the paper develops a theoretical framework based on structural causal models to characterize how frequently such harm may occur. The authors show that, under certain assumptions, the frequency of harm can be bounded using only data on the predictions humans make on their own, without the system. Building on this, they introduce a computational framework, based on conformal risk control, for designing DSS that are guaranteed to cause harm less frequently than a user-specified value. Experiments with real human prediction data from two studies validate the framework and reveal a trade-off between accuracy and counterfactual harm.
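
To make the calibration step concrete, here is a minimal sketch of how a conformal-risk-control-style procedure could select a prediction-set threshold so that an empirical harm estimate stays below a target level. This is an illustration under simplifying assumptions, not the authors’ exact algorithm: the function name calibrate_threshold, the score-thresholded set construction, and the harm proxy (the human was correct on their own, but the true label is missing from the prediction set) are all our own choices.

```python
import numpy as np

def calibrate_threshold(scores, labels, human_correct, alpha):
    """Pick the largest threshold lam such that a corrected empirical
    harm proxy stays below alpha (conformal-risk-control style).

    Prediction sets are C_lam(x) = {y : score(x, y) >= lam}, so a larger
    lam means smaller sets and more potential counterfactual harm.
    Harm proxy (an assumption): harm can occur only when the human was
    correct on their own but the true label is outside the set.
    """
    n = len(labels)
    best_lam = 0.0  # lam = 0 keeps every label in the set: zero harm here
    for lam in np.linspace(0.0, 1.0, 101):
        true_label_in_set = scores[np.arange(n), labels] >= lam
        harm = human_correct & ~true_label_in_set
        # finite-sample correction from conformal risk control (loss in [0, 1])
        corrected_risk = (harm.sum() + 1.0) / (n + 1)
        if corrected_risk <= alpha:
            best_lam = lam
        else:
            break  # the harm proxy is monotone in lam: stop at first violation
    return best_lam

# Hypothetical usage on synthetic calibration data.
rng = np.random.default_rng(0)
n, k = 500, 10
scores = rng.dirichlet(np.ones(k), size=n)   # stand-in for softmax scores
labels = rng.integers(0, k, size=n)          # ground-truth labels
human_correct = rng.random(n) < 0.7          # humans alone succeed ~70% of the time
print(calibrate_threshold(scores, labels, human_correct, alpha=0.05))
```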

Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper is about systems that help people make predictions, for example by suggesting a short list of likely answers. Sometimes these systems can actually be bad news, because they might stop people from making their own good choices: someone who would have answered correctly alone might get it wrong with the system’s help. The authors want to make sure these systems don’t cause that kind of harm too often. They use math and computer tools to figure out how frequently the harm might happen, based only on what people predict without the system. They then show how to build systems that are guaranteed to cause harm less often than a limit we choose in advance. The authors tested their ideas with real human predictions and found that there’s a trade-off between being accurate and not causing too much harm.

Keywords

» Artificial intelligence  » Classification