


Assessing Group Fairness with Social Welfare Optimization

by Violet Chen, J. N. Hooker, Derek Leben

First submitted to arXiv on: 19 May 2024

Categories

  • Main: Artificial Intelligence (cs.AI)
  • Secondary: Computers and Society (cs.CY); Computer Science and Game Theory (cs.GT)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

Read the original abstract here.

Medium Difficulty Summary (GrooveSquid.com, original content)

This paper challenges the traditional statistical parity metrics used to ensure fairness in Artificial Intelligence (AI) by highlighting their limitations. Specifically, these metrics disregard the actual welfare consequences of decisions and may fail to achieve the desired level of fairness for disadvantaged groups. The authors argue that a broader conception of social justice, based on optimizing a Social Welfare Function (SWF), provides a more comprehensive framework for assessing various definitions of parity. They focus on the well-known alpha fairness SWF, which has been defended by axiomatic and bargaining arguments for over 70 years. The results suggest that optimization theory can shed light on the intensely discussed question of how to achieve group fairness in AI.

Low Difficulty Summary (GrooveSquid.com, original content)

This paper questions traditional ways of ensuring fairness in AI by pointing out their flaws. Current methods ignore how decisions affect people's lives and may not help those who need it most. The authors look at a different approach, the Social Welfare Function (SWF), that tries to make decisions fairer. They focus on one type of SWF, alpha fairness, which has been studied and defended for over 70 years. The results show that this method can sometimes achieve equal treatment or equal chances, but often requires something different. It also shows that another popular method, predictive rate parity, isn't very useful.

Keywords

  • Artificial intelligence
  • Optimization