
Learning Social Fairness Preferences from Non-Expert Stakeholder Opinions in Kidney Placement

by Mukund Telukunta, Sukruth Rao, Gabriella Stickney, Venkata Sriram Siddardh Nadendla, Casey Canfield

First submitted to arXiv on: 4 Apr 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Human-Computer Interaction (cs.HC)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

The paper's original abstract serves as the high difficulty summary.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper addresses social biases in kidney placement recommendation systems, which play a critical role in modern organ transplantation. Existing approaches to algorithmic fairness in this domain substitute surgeons' decisions for true outcomes, due to delays in recording those outcomes. However, this substitution both inherits the biases of expert stakeholders and ignores the opinions of non-medical stakeholders. To address this concern, the authors design a novel fairness feedback survey for evaluating an acceptance rate predictor (ARP), a model that estimates kidney acceptance rates for match pairs. The survey is deployed on Prolific and collects opinions from 85 anonymous participants. The authors then propose a social fairness preference learning algorithm that minimizes social feedback regret under a logit-based model of fairness feedback. Both the algorithm and the feedback model are validated through simulation experiments and the Prolific data, and the estimated public preferences over group fairness notions in kidney placement are discussed.
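The summary above mentions a logit-based fairness feedback model whose preference weights are learned by minimizing social feedback regret. The paper's actual formulation is not reproduced here; the sketch below is only a rough, hypothetical illustration of the general idea, in which the probability that a participant labels the ARP "fair" decreases logistically with a weighted sum of group-fairness violations, and the weights are fit by logistic-loss gradient descent (a stand-in for the regret objective). All function names, the loss choice, and the data layout are assumptions, not the authors' method.

```python
import numpy as np

# Hypothetical sketch, NOT the authors' algorithm: each row of
# `violations` holds the measured violation of one group fairness
# notion per column; `feedback` is 1 if a participant judged the
# predictor "fair", else 0.

def logit_feedback_prob(w, violations):
    """P(participant says 'fair') under a logit model: larger weighted
    fairness violations make a 'fair' response less likely."""
    return 1.0 / (1.0 + np.exp(violations @ w))

def fit_preferences(violations, feedback, lr=0.5, epochs=2000):
    """Estimate preference weights over fairness notions by gradient
    descent on the logistic loss (a simple proxy for minimizing
    aggregate disagreement with the collected feedback)."""
    w = np.zeros(violations.shape[1])
    n = len(feedback)
    for _ in range(epochs):
        p = logit_feedback_prob(w, violations)
        # Gradient of mean logistic loss w.r.t. w for this model.
        grad = violations.T @ (feedback - p) / n
        w -= lr * grad
    return w
```

A larger learned weight would indicate that violations of that fairness notion make respondents noticeably more likely to call the system unfair, which is one way the estimated public preferences could be read off.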
Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper works to make sure that the computer systems recommending where donated kidneys should go are fair. Right now, these systems can be biased because they were trained on past human decisions. The authors want to fix this by asking people what they think is fair. They created a special survey to do this and collected opinions from 85 anonymous participants. They also came up with a new way for computers to learn what fairness means based on how people answer the survey questions. This helps make sure that the computer systems treat everyone equally.

Keywords

* Artificial intelligence