
Summary of Auditing and Enforcing Conditional Fairness via Optimal Transport, by Mohsen Ghassemi et al.


Auditing and Enforcing Conditional Fairness via Optimal Transport

by Mohsen Ghassemi, Alan Mishler, Niccolo Dalmasso, Luhao Zhang, Vamsi K. Potluru, Tucker Balch, Manuela Veloso

First submitted to arXiv on: 17 Oct 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Machine Learning (stat.ML)

  • Abstract of paper
  • PDF of paper


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper, written at different levels of difficulty. The medium-difficulty and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to read the version that suits you best!

High Difficulty Summary (the paper’s original abstract, written by the paper authors)
Read the original abstract via the arXiv links above.

Medium Difficulty Summary (original content by GrooveSquid.com)
The paper studies conditional demographic parity (CDP), a fairness criterion that requires demographic parity in a predictive model’s outputs after conditioning on additional features. Many algorithmic fairness techniques target unconditional demographic parity, but CDP is harder to attain, especially when the conditioning variable has many levels or the model output is continuous. The authors propose new measures of conditional demographic disparity (CDD) based on statistical distances from the optimal transport literature, and they design regularization-based methods, fairbit and fairlp, that target CDP during training. These methods remain effective even when the conditioning variable has many levels, and for continuous model outputs they target full equality of the conditional distributions rather than only first moments or proxy quantities. The paper demonstrates the efficacy of the approaches on real-world datasets.
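
The CDD measures described above compare entire distributions of model outputs across groups within each level of the conditioning variable. The sketch below illustrates one way such an audit could look, assuming CDD is estimated as a frequency-weighted average of per-level 1-Wasserstein distances between two groups’ score distributions. The function name, the synthetic data, and the weighting scheme are illustrative assumptions rather than the paper’s actual definitions, and the fairbit/fairlp training procedures are not shown.

```python
# Illustrative sketch of a conditional demographic disparity (CDD) audit:
# for each level of the conditioning variable, compare the distributions of
# model scores for two demographic groups with a 1-Wasserstein distance,
# then average the per-level distances weighted by how common each level is.
# This is a hypothetical example, not the paper's implementation.

import numpy as np
from scipy.stats import wasserstein_distance


def conditional_demographic_disparity(scores, group, condition):
    """Frequency-weighted average of per-level Wasserstein distances.

    scores    : model outputs (probabilities or continuous scores)
    group     : binary array indicating the protected group
    condition : array of levels of the conditioning variable
    """
    scores = np.asarray(scores, dtype=float)
    group = np.asarray(group)
    condition = np.asarray(condition)

    total, disparity = len(scores), 0.0
    for level in np.unique(condition):
        mask = condition == level
        s0 = scores[mask & (group == 0)]
        s1 = scores[mask & (group == 1)]
        if len(s0) == 0 or len(s1) == 0:
            continue  # level observed for only one group; skipped here
        disparity += (mask.sum() / total) * wasserstein_distance(s0, s1)
    return disparity


# Example audit on synthetic data
rng = np.random.default_rng(0)
n = 2000
condition = rng.integers(0, 5, size=n)   # conditioning variable with 5 levels
group = rng.integers(0, 2, size=n)       # protected attribute
scores = rng.beta(2 + group, 2, size=n)  # scores that depend on the group
print(f"CDD estimate: {conditional_demographic_disparity(scores, group, condition):.4f}")
```

A regularization-based approach in the spirit of the paper would add a disparity term of this kind to the training loss, so that the model trades off predictive accuracy against conditional fairness.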

Low Difficulty Summary (original content by GrooveSquid.com)
This paper is about making sure AI models are fair and don’t treat people unfairly because of their race, gender, or other personal characteristics. Many AI models are not designed with fairness in mind, which can lead to unfair outcomes. The authors suggest new ways to measure how fairly a model treats different groups of people, even when it makes predictions based on lots of other information about each person. They also propose new training methods that nudge models toward treating different groups fairly. This could be important for making sure AI models are used in a way that is fair and respectful.

Keywords

» Artificial intelligence  » Regularization