Utility-Fairness Trade-Offs and How to Find Them

by Sepehr Dehdashtian, Bashir Sadeghi, Vishnu Naresh Boddeti

First submitted to arXiv on: 15 Apr 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Computers and Society (cs.CY); Machine Learning (cs.LG)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper, each written at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
The paper presents a comprehensive framework for building classification systems with demographic fairness considerations, optimizing for both utility and fairness. The authors introduce two utility-fairness trade-offs, the Data-Space Trade-Off and the Label-Space Trade-Off, which reveal three regions of the utility-fairness plane indicating what is fully possible, partially possible, or impossible to achieve. They also propose U-FaTE, a method for numerically quantifying these trade-offs from data samples. The framework is evaluated on over 1,000 pre-trained models across multiple datasets and prediction tasks. A rough illustration of placing a single model in the utility-fairness plane follows these summaries.

Low Difficulty Summary (original content by GrooveSquid.com)
The paper helps us understand how to build fair classification systems that balance what’s useful with what’s fair. It describes two ways to think about this balance, which can help us figure out what is achievable. The authors also developed a new way to measure fairness in learned representations. They tested many different models and found that most current approaches do not balance fairness and utility as well as they could.

Keywords

  • Artificial intelligence
  • Classification