AIM: Attributing, Interpreting, Mitigating Data Unfairness

by Zhining Liu, Ruizhong Qiu, Zhichen Zeng, Yada Zhu, Hendrik Hamann, Hanghang Tong

First submitted to arXiv on: 13 Jun 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Machine Learning (stat.ML)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same paper at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from whichever version suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
This paper addresses the lack of research on tracing biases in real-world data, which is crucial for transparency and interpretability in fair machine learning (FairML). The authors pose a new problem: discovering samples that reflect historical bias and prejudice. They develop a sample bias criterion and practical algorithms for measuring and countering sample bias, providing intuitive sample-level attribution and explanation of historical bias. They also design two FairML strategies that mitigate both group and individual unfairness at the cost of minimal predictive utility loss, and they demonstrate the effectiveness of their methods on multiple real-world datasets. (An illustrative code sketch follows the summaries below.)
Low Difficulty Summary (original content by GrooveSquid.com)
This paper looks at a big problem in machine learning: fairness. Most research today focuses on making sure a model’s predictions aren’t biased against certain groups, but biases also exist in the data used to train those models. This paper tackles that issue by finding and explaining the biased samples in the data. The authors developed new ways to measure and correct these biases, which can help make machine learning fairer, and they tested their methods on real-world datasets and showed that they work well.

Keywords

» Artificial intelligence  » Machine learning