
Summary of “Patriarchy Hurts Men Too.” Does Your Model Agree? A Discussion on Fairness Assumptions, by Marco Favier and Toon Calders


“Patriarchy Hurts Men Too.” Does Your Model Agree? A Discussion on Fairness Assumptions

by Marco Favier, Toon Calders

First submitted to arXiv on: 1 Aug 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper and is written at a different level of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper discusses the limitations of the traditional approach to fairness in machine learning, which typically involves selecting a fairness measure and then choosing the model that minimizes it while maximizing performance. The authors argue that this approach often relies on implicit assumptions about how bias is introduced into the data, which can be problematic. They demonstrate that several common fairness measures are based on these assumptions and formally prove the implications that follow from them. The paper concludes that either the biasing process is more complex than previously thought, or many of the fairness-aware models that have been developed are unnecessary. (A rough code sketch of this traditional pipeline appears after the summaries below.)
Low Difficulty Summary (written by GrooveSquid.com, original content)
The paper looks at how we try to make sure machine learning models are fair. Usually, we choose a way to measure fairness, pick a model that does well on that measure, and then try to make it perform well overall. But the authors say this approach has problems because it relies on certain assumptions about how bias gets into our data. They show that many common ways to measure fairness are built on these assumptions, and they formally prove what those assumptions imply. This means we might need to rethink how we develop models if we want them to handle more complicated situations.
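
To make the criticized pipeline concrete, here is a minimal sketch written for this summary rather than taken from the paper. It assumes demographic parity difference as the chosen fairness measure (one of the common measures the paper refers to), a binary sensitive attribute array called group, and scikit-learn-style candidate models with a predict method; all names and the tolerance value are illustrative.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups.

    One common fairness measure; the paper argues that picking such a
    measure implicitly assumes how bias entered the data.
    """
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

def pick_model(candidates, X, y_true, group, tol=0.05):
    """Traditional pipeline sketch: among candidate models whose unfairness
    is within tol, keep the one with the highest accuracy."""
    best_model, best_acc = None, -1.0
    for model in candidates:
        y_pred = model.predict(X)  # hypothetical already-fitted models
        if demographic_parity_difference(y_pred, group) <= tol:
            acc = (y_pred == y_true).mean()
            if acc > best_acc:
                best_model, best_acc = model, acc
    return best_model
```

The point of the paper is that the very first step of this sketch, fixing a fairness measure and constraining models by it, already encodes an assumption about the biasing process, which may or may not hold for the data at hand.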

Keywords

» Artificial intelligence  » Machine learning