Summary of FairRR: Pre-Processing for Group Fairness through Randomized Response, by Xianli Zeng et al.
FairRR: Pre-Processing for Group Fairness through Randomized Response
by Xianli Zeng, Joshua Ward, Guang Cheng
First submitted to arXiv on: 12 Mar 2024
Categories
- Main: Machine Learning (stat.ML)
- Secondary: Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | The paper proposes a novel approach to achieving group fairness in machine learning models by formulating it as an optimization problem in the pre-processing domain. Building on previous work on in-processing and post-processing fairness, the authors show that optimal design matrices can be used to modify response variables in a Randomized Response framework. The proposed algorithm, FairRR, is demonstrated to achieve excellent downstream model utility while controlling for group fairness measures. |
| Low | GrooveSquid.com (original content) | This paper tries to make machine learning models fairer by changing how we prepare data before using it. Think of it like editing pictures before showing them to others: you can adjust brightness and contrast to make the picture look better without changing what's actually in the picture. The authors show that this idea, called FairRR, helps make the model fair while still working well. |
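To make the Randomized Response idea concrete, here is a minimal sketch of label pre-processing with group-dependent flip probabilities. The flip probabilities below are illustrative placeholders, not the paper's optimal design-matrix values, which FairRR derives by solving an optimization problem.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-group flip probabilities. In FairRR these would come
# from an optimal design matrix; the numbers here are illustrative only.
flip_prob = {0: 0.10, 1: 0.25}

def randomized_response(y, group):
    """Flip each binary label with a probability depending on group membership."""
    y = np.asarray(y)
    group = np.asarray(group)
    p = np.array([flip_prob[g] for g in group])
    flips = rng.random(len(y)) < p
    # Flipped labels: 1 - y where the coin came up "flip", otherwise y.
    return np.where(flips, 1 - y, y)

y = np.array([0, 1, 1, 0, 1, 0])        # original binary labels
g = np.array([0, 0, 1, 1, 0, 1])        # sensitive group membership
y_private = randomized_response(y, g)   # pre-processed labels for training
```

A downstream classifier is then trained on `y_private` in place of `y`; because only the labels are perturbed, any standard training pipeline can be used unchanged.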
Keywords
* Artificial intelligence * Machine learning * Optimization