Bayes-Optimal Classifiers under Group Fairness

by Xianli Zeng, Edgar Dobriban, Guang Cheng

First submitted to arXiv on: 20 Feb 2022

Categories

  • Main: Machine Learning (stat.ML)
  • Secondary: Machine Learning (cs.LG)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
The paper presents a unified framework for deriving Bayes-optimal classifiers under group fairness constraints. By applying the classical Neyman-Pearson argument from hypothesis testing, the authors show how to control the level of disparity and achieve an optimal fairness-accuracy tradeoff. The proposed FairBayes algorithm is validated through extensive experiments. A toy code illustration of the group-wise thresholding idea appears after these summaries.

Low Difficulty Summary (original content by GrooveSquid.com)
Machine learning is being used more often to make important decisions, like those about social welfare. To make sure these predictions don’t unfairly affect certain groups, many people have been working on making machine learning fairer. One big problem has been figuring out how to design the best possible algorithms under fairness constraints. This paper solves this problem by using a classic idea from statistics called the Neyman-Pearson argument. It shows how to create an algorithm that is both fair and good at making predictions.
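
To make the thresholding idea concrete, here is a minimal sketch in Python of a classifier that applies a separate decision threshold per group so that every group ends up with roughly the same positive-prediction rate (a demographic-parity-style criterion). The function name, the quantile-based threshold choice, and the synthetic data are illustrative assumptions; this is not the paper's FairBayes algorithm, in which the group-specific thresholds come out of the Neyman-Pearson argument and yield the optimal fairness-accuracy tradeoff.

```python
import numpy as np

def groupwise_threshold_classifier(scores, groups, target_rate):
    """Assign label 1 when the estimated P(Y=1 | X) exceeds a
    group-specific threshold, with each threshold chosen so the
    group's positive-prediction rate is roughly `target_rate`.

    Illustrative sketch only; not the paper's FairBayes algorithm.
    """
    thresholds = {}
    for g in np.unique(groups):
        group_scores = scores[groups == g]
        # Scores above the (1 - target_rate) quantile make up about a
        # target_rate fraction of the group, so thresholding there
        # approximately equalizes selection rates across groups.
        thresholds[g] = np.quantile(group_scores, 1.0 - target_rate)
    predictions = np.array([s > thresholds[g] for s, g in zip(scores, groups)])
    return predictions, thresholds

# Toy usage with synthetic scores for two groups.
rng = np.random.default_rng(0)
scores = rng.uniform(size=1000)          # stand-in for a fitted P(Y=1 | X)
groups = rng.integers(0, 2, size=1000)   # binary protected attribute
preds, thr = groupwise_threshold_classifier(scores, groups, target_rate=0.3)
```

Quantile-based thresholds equalize selection rates by construction but say nothing about accuracy; the paper's contribution is characterizing the thresholds that are Bayes-optimal subject to the fairness constraint.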

Keywords

  * Artificial intelligence
  * Machine learning