Summary of Positive-Sum Fairness: Leveraging Demographic Attributes to Achieve Fair AI Outcomes Without Sacrificing Group Gains, by Samia Belhadj et al.
Positive-Sum Fairness: Leveraging Demographic Attributes to Achieve Fair AI Outcomes Without Sacrificing Group Gains
by Samia Belhadj, Sanguk Park, Ambika Seth, Hesham Dar, Thijs Kooi
First submitted to arXiv on: 30 Sep 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV); Computers and Society (cs.CY)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | Read the original abstract here
Medium | GrooveSquid.com (original content) | This paper explores fairness in medical AI, recognizing that equal performance across groups is not always sufficient. The authors argue that decreases in fairness can have varying effects depending on the type of change and how sensitive attributes are used. They introduce the concept of positive-sum fairness, which allows overall performance to increase as long as no individual subgroup's performance degrades. This approach enables the use of sensitive attributes correlated with disease to improve accuracy without compromising fairness.
Low | GrooveSquid.com (original content) | The paper looks at making medical AI fairer by considering that not all changes in fairness are bad. It says some changes can be good if they don't hurt individual groups. The authors call this "positive-sum fairness". This means we can use information about people who have a certain condition to make the AI better without hurting other groups.
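The positive-sum fairness criterion described above can be sketched as a simple acceptance check: a new model is acceptable when overall performance improves and no subgroup's performance drops below its baseline. The function name, the dictionary layout, and the use of a single scalar metric (e.g. AUC) are illustrative assumptions, not the paper's actual formulation.

```python
def is_positive_sum_fair(baseline_scores, new_scores, tol=0.0):
    """Illustrative sketch of the positive-sum fairness idea.

    baseline_scores / new_scores: dicts mapping each subgroup name
    to a performance metric (e.g. AUC), plus an 'overall' key.
    Accept the new model only if overall performance does not drop
    and no individual subgroup is worse off (within tolerance `tol`).
    """
    overall_gain = new_scores["overall"] >= baseline_scores["overall"]
    no_group_harmed = all(
        new_scores[group] + tol >= baseline_scores[group]
        for group in baseline_scores
        if group != "overall"
    )
    return overall_gain and no_group_harmed


# Hypothetical numbers: the performance gap between groups widens,
# but every group is at least as well off as before, so this change
# would count as positive-sum fair under the sketch above.
baseline = {"overall": 0.80, "group_a": 0.82, "group_b": 0.78}
improved = {"overall": 0.84, "group_a": 0.85, "group_b": 0.79}
print(is_positive_sum_fair(baseline, improved))  # True
```

Note how this differs from a strict equal-performance criterion: the gap between `group_a` and `group_b` grows, yet the change is accepted because neither group loses performance.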