Cherry on the Cake: Fairness is NOT an Optimization Problem
by Marco Favier, Toon Calders
First submitted to arXiv on: 24 Jun 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Computers and Society (cs.CY); Computer Science and Game Theory (cs.GT)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper at a different level of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to read the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | The paper examines “cherry-picking” in the Fair AI literature: a malicious model can satisfy a fairness metric on paper while still selecting individuals unfairly, continuing to harm marginalized communities. Contrary to common assumptions, the authors show that optimizing for fairness metrics can itself lead to cherry-picking, because of constraints inherent in the optimization process. To demonstrate this, they draw a parallel between fair cake-cutting and supervised multi-label classification, and illustrate how this connection can be exploited for both fairness and classification purposes. |
| Low | GrooveSquid.com (original content) | This paper talks about a problem in artificial intelligence called “cherry-picking”. It happens when an AI model is supposed to be fair but picks the wrong people anyway, which hurts already-marginalized groups. People assumed that optimizing a model for fairness would prevent this, but the researchers found that even an optimized model can still pick unfairly. To explain why, they compared AI classification to a math problem called “fair cake-cutting”, and showed how this idea can help make AI models both fairer and more accurate. |
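The cherry-picking idea above can be illustrated with a toy sketch (this is not the paper’s construction; the groups, scores, and the demographic-parity metric are assumptions for illustration): two selection rules can produce identical per-group selection rates, yet one of them deliberately picks the least-qualified members of one group. The fairness metric alone cannot tell them apart.

```python
# Toy illustration (assumed setup, not from the paper): two selection rules
# both satisfy demographic parity -- equal selection rates per group -- but
# one "cherry-picks" the lowest-scoring members of group B.

# Hypothetical applicants: (group, qualification score); higher = more qualified.
applicants = [
    ("A", 0.9), ("A", 0.8), ("A", 0.3), ("A", 0.2),
    ("B", 0.9), ("B", 0.8), ("B", 0.3), ("B", 0.2),
]

def selection_rate(selected, group):
    """Fraction of a group's applicants that were selected."""
    pool = [a for a in applicants if a[0] == group]
    chosen = [a for a in selected if a[0] == group]
    return len(chosen) / len(pool)

# Honest rule: select the top-scoring half of each group.
fair = [a for a in applicants if a[1] >= 0.8]

# Cherry-picking rule: same selection rate for group B, but it picks
# group B's *lowest*-scoring members.
cherry = ([a for a in applicants if a[0] == "A" and a[1] >= 0.8]
          + [a for a in applicants if a[0] == "B" and a[1] <= 0.3])

# Both rules pass the demographic-parity check (rate 0.5 for each group),
# even though the second one is plainly unfair to group B.
for rule in (fair, cherry):
    assert selection_rate(rule, "A") == selection_rate(rule, "B") == 0.5
```

The point of the sketch is the paper’s observation in miniature: satisfying a fairness metric is a constraint on aggregate statistics, and many selections, including adversarial ones, meet the same constraint.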
Keywords
» Artificial intelligence » Classification » Optimization » Supervised