Fairness Risks for Group-conditionally Missing Demographics

by Kaiqi Jiang, Wenzhe Fan, Mao Li, Xinhua Zhang

First submitted to arXiv on: 20 Feb 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Computers and Society (cs.CY)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (Paper authors)
The high difficulty version is the paper's original abstract.

Medium Difficulty Summary (GrooveSquid.com, original content)
This paper proposes a fairness-aware classification model that does not require full knowledge of the sensitive features. Existing methods assume these features are fully observed, which is often impractical due to privacy concerns, legal restrictions, and individuals' fear of discrimination. To address this, the authors augment general fairness risks with probabilistic imputations of the sensitive features, while jointly learning the group-conditionally missing probabilities in a variational autoencoder. Experiments on image and tabular datasets show an improved trade-off between accuracy and fairness.
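To make the core idea concrete, here is a minimal sketch of how a fairness risk can be evaluated when group membership is only known probabilistically. This is an illustrative NumPy example, not the paper's implementation: the `group_probs` input stands in for the imputed group probabilities that the authors obtain from their variational autoencoder, and the demographic-parity gap is used as one simple example of a fairness risk.

```python
import numpy as np

def soft_demographic_parity_gap(scores, group_probs):
    """Demographic-parity gap with probabilistic (imputed) groups.

    scores:      (n,) predicted positive-class probabilities
    group_probs: (n, 2) imputed probabilities that each sample belongs
                 to group 0 or group 1 (in the paper these would come
                 from the learned imputation model; here they are given)
    """
    scores = np.asarray(scores, dtype=float)
    w = np.asarray(group_probs, dtype=float)
    # Soft group-wise mean prediction: each sample contributes to a
    # group's positive rate in proportion to its membership probability.
    rate0 = (w[:, 0] * scores).sum() / w[:, 0].sum()
    rate1 = (w[:, 1] * scores).sum() / w[:, 1].sum()
    return abs(rate0 - rate1)
```

When the group probabilities are one-hot (membership fully known), this reduces to the ordinary demographic-parity gap; with uncertain memberships, the penalty degrades gracefully instead of requiring the sensitive feature to be disclosed.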
Low Difficulty Summary (GrooveSquid.com, original content)
This paper helps make AI models fair without asking people to share private information. Right now, many models need everyone's sensitive details, which is not realistic: people might be afraid to share their age or other personal information because of discrimination concerns. The authors create a new way to handle this by estimating, rather than requiring, these sensitive features based on patterns in the data. The approach works well on both image and tabular data, achieving a better balance between being accurate and being fair.

Keywords

  • Artificial intelligence
  • Classification
  • Encoder