
Intersectional Unfairness Discovery

by Gezheng Xu, Qi Chen, Charles Ling, Boyu Wang, Changjian Shui

First submitted to arXiv on: 31 May 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Computers and Society (cs.CY)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper's original abstract, available from the arXiv listing above.

Medium Difficulty Summary (original content by GrooveSquid.com)
This paper tackles the problem of bias in artificial intelligence (AI) systems, focusing on intersectional fairness under multiple sensitive attributes. Current research often neglects this setting and considers only single-sensitive-attribute subgroups. The proposed Bias-Guided Generative Network (BGGN) efficiently generates high-bias intersectional sensitive attributes by treating each bias value as a reward. Experiments on real-world text and image datasets demonstrate that BGGN discovers diverse, high-bias data. To further evaluate the generated data, the paper formulates it as prompts and uses generative AI to produce new texts and images, revealing potential unfairness in modern generative AI systems.
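
The reward-driven search described above can be made concrete with a short sketch. The code below is a minimal, hypothetical illustration, not the authors' implementation: a factorized categorical generator over discrete sensitive-attribute combinations is trained with a REINFORCE-style policy gradient, and the stand-in `bias_of` oracle plays the role of the measured bias value that the paper uses as the reward. The attribute names and sizes are invented for the example.

```python
# Minimal sketch of bias-as-reward generation -- NOT the authors' code.
# Hypothetical assumptions: three sensitive attributes with small discrete
# domains, and a stand-in `bias_of` oracle; in the paper, the reward would
# be a bias value measured for that subgroup on real data.

import torch
import torch.nn as nn

ATTR_SIZES = [2, 5, 4]  # e.g. |gender|, |race|, |age bracket| (hypothetical)

class AttributeGenerator(nn.Module):
    """Factorized categorical distribution over attribute combinations."""
    def __init__(self, sizes):
        super().__init__()
        self.logits = nn.ParameterList(
            [nn.Parameter(torch.zeros(s)) for s in sizes]
        )

    def sample(self):
        """Sample one combination and return it with its log-probability."""
        combo, log_prob = [], 0.0
        for logit in self.logits:
            dist = torch.distributions.Categorical(logits=logit)
            a = dist.sample()
            combo.append(a.item())
            log_prob = log_prob + dist.log_prob(a)
        return tuple(combo), log_prob

def bias_of(combo):
    # Stand-in oracle: a fixed pseudo-random bias score per subgroup.
    return (hash(combo) % 1000) / 1000.0

gen = AttributeGenerator(ATTR_SIZES)
opt = torch.optim.Adam(gen.parameters(), lr=0.05)
baseline = 0.0

for step in range(500):
    combo, log_prob = gen.sample()
    reward = bias_of(combo)                   # the bias value is the reward
    baseline = 0.9 * baseline + 0.1 * reward  # running mean reduces variance
    loss = -(reward - baseline) * log_prob    # REINFORCE policy gradient
    opt.zero_grad()
    loss.backward()
    opt.step()

# Frequent samples after training are candidate high-bias intersections.
print([gen.sample()[0] for _ in range(5)])
```

Treating bias as a reward lets the search concentrate probability mass on high-bias combinations without enumerating every intersection, which is the efficiency claim in the summary above.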
Low Difficulty Summary (original content by GrooveSquid.com)
This research looks at how artificial intelligence (AI) can be unfair to certain groups of people. Most studies examine bias along only one attribute at a time, but this paper asks what happens when several sensitive attributes combine. The team created a method called the Bias-Guided Generative Network (BGGN) that finds these biased intersectional groups efficiently, and it worked well when tested on real-world text and image datasets. To probe the discovered groups further, they turned them into prompts for generative AI, and the new text and images it produced suggest that some popular AI systems might not be as fair as we think.

Keywords

» Artificial intelligence