Summary of On the Inductive Biases of Demographic Parity-based Fair Learning Algorithms, by Haoyu Lei et al.
On the Inductive Biases of Demographic Parity-based Fair Learning Algorithms
by Haoyu Lei, Amin Gohari, Farzan Farnia
First submitted to arXiv on: 28 Feb 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Information Theory (cs.IT)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | In this research paper, the authors investigate how demographic parity (DP) regularization affects supervised learning algorithms that aim to assign labels with minimal dependence on sensitive attributes. They analytically characterize how standard DP-based regularization shapes the conditional distribution of the predicted label given the sensitive attribute, and show that an imbalanced training set can bias the learned classifier toward the majority sensitive-attribute group. To mitigate this inductive bias, they propose Sensitive Attribute-Based Distributionally Robust Optimization (SA-DRO), which makes the learner robust to the marginal distribution of the sensitive attribute. Numerical results on centralized and distributed learning problems support the theoretical findings and the debiasing effect of SA-DRO (a minimal code sketch of both ideas follows the table). |
Low | GrooveSquid.com (original content) | This research is about making sure machine learning algorithms don't make unfair decisions based on characteristics like race or gender. The authors studied how a popular fairness method called demographic parity (DP) affects what these algorithms learn. They found that when the training data is imbalanced, enforcing DP can actually push the algorithm's decisions toward the majority group. To fix this, the authors developed a new optimization method, SA-DRO, that keeps the algorithm fair while staying robust to that imbalance. |
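To make the medium summary's two ideas concrete, here is a minimal, illustrative Python sketch, not taken from the paper: a demographic parity penalty measured as the gap between group-conditional mean predictions, and an SA-DRO-style worst-case reweighting of the sensitive-attribute marginal. The function names, the binary sensitive attribute, the mean-score DP proxy, and the `lam`/`eps` parameters are all assumptions made for illustration; the paper's actual formulation may differ.

```python
import numpy as np

def dp_gap(scores, sensitive):
    """Demographic parity proxy: absolute gap in mean predicted score
    between the two sensitive-attribute groups (assumed coded 0/1)."""
    scores = np.asarray(scores, dtype=float)
    sensitive = np.asarray(sensitive)
    return abs(scores[sensitive == 0].mean() - scores[sensitive == 1].mean())

def dp_regularized_loss(base_losses, scores, sensitive, lam=1.0):
    """Standard DP-regularized objective (illustrative form): average loss
    plus a lambda-weighted DP penalty."""
    return np.mean(base_losses) + lam * dp_gap(scores, sensitive)

def sa_dro_loss(base_losses, sensitive, eps=0.2):
    """SA-DRO-style objective (sketch): worst-case group-weighted loss over
    sensitive-attribute marginals within eps of the empirical marginal."""
    base_losses = np.asarray(base_losses, dtype=float)
    sensitive = np.asarray(sensitive)
    group_means = np.array([base_losses[sensitive == g].mean() for g in (0, 1)])
    p_hat = np.array([(sensitive == g).mean() for g in (0, 1)])
    # Adversary shifts up to eps probability mass toward the worse-off group,
    # so the objective no longer hinges on which group dominates the data.
    worst = int(np.argmax(group_means))
    p = p_hat.copy()
    shift = min(eps, p[1 - worst])
    p[worst] += shift
    p[1 - worst] -= shift
    return float(p @ group_means)
```

The two-point reweighting above is the simplest possible ambiguity set over a binary sensitive attribute; it illustrates why an SA-DRO-style objective is less sensitive to an imbalanced sensitive-attribute marginal than a plain average-loss objective.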
Keywords
- Artificial intelligence
- Classification
- Machine learning
- Optimization
- Regularization
- Supervised