Summary of ProxiMix: Enhancing Fairness with Proximity Samples in Subgroups, by Jingyu Hu et al.
ProxiMix: Enhancing Fairness with Proximity Samples in Subgroups
by Jingyu Hu, Jun Hong, Mengnan Du, Weiru Liu
First submitted to arXiv on: 2 Oct 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | This paper proposes ProxiMix, a pre-processing strategy that addresses bias in machine learning by combining linear mixup with a new bias mitigation algorithm. Because standard mixup can still retain biases present in the dataset labels, ProxiMix generates proximity-aware augmented samples within subgroups to counteract them. The authors evaluated the approach on three datasets with three ML models under different hyperparameter settings, demonstrating its effectiveness from both the fairness-of-predictions and the fairness-of-recourse perspectives. (An illustrative code sketch of the mixup-with-proximity idea appears after this table.) |
Low | GrooveSquid.com (original content) | This paper aims to make machine learning fairer. Right now, some methods can still keep biases in the data. The authors came up with a new way to mix data together while keeping fairness in mind. They tested it on three different datasets and showed that it works well from two important angles: making fair predictions and giving good “what if” scenarios. |
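To make the core idea more concrete, here is a minimal, hypothetical sketch of combining linear mixup with proximity-based sample selection across subgroups. It is not the paper's actual ProxiMix algorithm; the function names (`linear_mixup`, `proximity_mixup`) and parameters (`n_aug`, `k`, `alpha`) are illustrative assumptions, and the neighbour-selection and label-mixing rules are only one plausible realisation of "proximity samples in subgroups."

```python
import numpy as np


def linear_mixup(x1, y1, x2, y2, lam):
    """Standard linear mixup of two samples and their (numeric) labels."""
    x_new = lam * x1 + (1 - lam) * x2
    y_new = lam * y1 + (1 - lam) * y2
    return x_new, y_new


def proximity_mixup(X, y, group, n_aug=100, k=5, alpha=0.4, seed=0):
    """Hypothetical sketch: mix each anchor sample with one of its k nearest
    neighbours from a different protected subgroup, so augmented points lie
    near subgroup boundaries. Assumes at least two subgroups are present.
    This is an illustration, not the paper's exact ProxiMix procedure."""
    rng = np.random.default_rng(seed)
    X_aug, y_aug = [], []
    for _ in range(n_aug):
        i = rng.integers(len(X))                       # random anchor sample
        other = np.flatnonzero(group != group[i])      # candidates from other subgroup(s)
        dists = np.linalg.norm(X[other] - X[i], axis=1)
        nearest = other[np.argsort(dists)[:k]]         # k closest cross-group samples
        j = nearest[rng.integers(len(nearest))]        # pick one proximity sample
        lam = rng.beta(alpha, alpha)                   # usual mixup mixing coefficient
        x_new, y_new = linear_mixup(X[i], y[i], X[j], y[j], lam)
        X_aug.append(x_new)
        y_aug.append(y_new)
    return np.vstack(X_aug), np.array(y_aug)


if __name__ == "__main__":
    # Toy demo: 2-D features, binary labels, binary protected attribute.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] > 0).astype(float)
    group = rng.integers(2, size=200)
    X_aug, y_aug = proximity_mixup(X, y, group, n_aug=50)
    print(X_aug.shape, y_aug.shape)  # (50, 2) (50,)
```

In this sketch the Beta-distributed mixing coefficient follows the common mixup convention, and labels are mixed linearly into soft labels; the paper's actual method may select neighbours, define subgroups, or weight labels differently.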
Keywords
» Artificial intelligence » Hyperparameter » Machine learning