


Fairness And Performance In Harmony: Data Debiasing Is All You Need

by Junhua Liu, Wendy Wan Yee Hui, Roy Ka-Wei Lee, Kwan Hui Lim

First submitted to arxiv on: 26 Nov 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI); Information Retrieval (cs.IR)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here
Medium Difficulty Summary (written by GrooveSquid.com, original content)
This study explores the intersection of machine learning (ML) and human decision-making, examining how biases in both domains affect fairness. The researchers use three ML models (XGB, Bi-LSTM, KNN) to analyze a real-world university admission dataset of 870 profiles. Textual features are encoded with BERT embeddings, and individual fairness is evaluated by assessing decision consistency, both among experts with diverse backgrounds and among the ML models. The results indicate that the ML models outperform humans on fairness, achieving a 14.08% to 18.79% improvement. For group fairness, the authors propose a gender-debiasing pipeline and demonstrate that it removes gender-specific language without compromising prediction performance. The study concludes that fairness and performance can coexist, and advocates a hybrid approach combining human judgment and ML models.
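The summary above mentions evaluating individual fairness by assessing decision consistency among raters, but does not spell out the metric. A minimal sketch, assuming consistency is measured as the average pairwise agreement between raters on the same set of profiles (the function names and the example ratings are illustrative, not taken from the paper):

```python
# Hypothetical consistency metric: average pairwise agreement.
# The paper's exact formulation is not given in this summary; this
# assumes each rater (human expert or ML model) outputs one binary
# admit/reject decision per profile.
from itertools import combinations

def pairwise_agreement(decisions_a, decisions_b):
    """Fraction of profiles on which two raters make the same decision."""
    matches = sum(a == b for a, b in zip(decisions_a, decisions_b))
    return matches / len(decisions_a)

def consistency(all_raters):
    """Mean pairwise agreement over every pair of raters."""
    pairs = list(combinations(all_raters, 2))
    return sum(pairwise_agreement(a, b) for a, b in pairs) / len(pairs)

# Example: three raters scoring four admission profiles (1 = admit).
raters = [
    [1, 0, 1, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
]
print(round(consistency(raters), 2))  # → 0.67
```

Under this reading, the paper's finding that ML models are more consistent than human experts would correspond to the models' pairwise agreement scores exceeding the experts' by 14.08% to 18.79%.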
Low Difficulty Summary (written by GrooveSquid.com, original content)
Machine learning models are used to decide who gets accepted into universities. But these models can be biased, just like humans. This study looks at how fair machine learning models' decisions are compared to humans'. The researchers use three different machine learning models to analyze a dataset of 870 university applicants, and they compare the consistency of human experts' decisions with that of the models. Surprisingly, the machine learning models turn out to be fairer than the humans! The study also shows that it is possible to remove gendered bias from language without making the predictions worse.
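The summaries describe a pipeline that removes gender-specific language before prediction, but give no implementation details. A minimal sketch, assuming a simple lexicon-based substitution (the word list and function names here are hypothetical, not from the paper, which may use a more sophisticated approach):

```python
# Hypothetical lexicon-based gender-debiasing step.
# Assumption: debiasing means mapping gender-specific words in an
# applicant's text to neutral alternatives before BERT encoding.
import re

# Illustrative word list, not taken from the paper.
NEUTRAL_MAP = {
    "he": "they", "she": "they",
    "his": "their", "her": "their",
    "himself": "themselves", "herself": "themselves",
    "chairman": "chairperson", "chairwoman": "chairperson",
}

_PATTERN = re.compile(
    r"\b(" + "|".join(map(re.escape, NEUTRAL_MAP)) + r")\b",
    flags=re.IGNORECASE,
)

def debias(text):
    """Replace gender-specific words with neutral equivalents."""
    def swap(match):
        replacement = NEUTRAL_MAP[match.group(0).lower()]
        # Preserve sentence-initial capitalisation.
        if match.group(0)[0].isupper():
            replacement = replacement.capitalize()
        return replacement
    return _PATTERN.sub(swap, text)

print(debias("She presented her project to the chairman."))
# → They presented their project to the chairperson.
```

Note that this toy version does not fix grammatical agreement (e.g. verb number) after substitution; the paper's claim is only that its pipeline removes gender-specific language without hurting prediction performance.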

Keywords

» Artificial intelligence  » BERT  » LSTM  » Machine learning