

Data Debiasing with Datamodels (D3M): Improving Subgroup Robustness via Data Selection

by Saachi Jain, Kimia Hamidieh, Kristian Georgiev, Andrew Ilyas, Marzyeh Ghassemi, Aleksander Madry

First submitted to arXiv on: 24 Jun 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Computers and Society (cs.CY); Machine Learning (stat.ML)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper proposes a novel approach called Data Debiasing with Datamodels (D3M) to address the failure of machine learning models on underrepresented subgroups. The authors identify and remove the specific training examples that drive these failures, allowing debiased classifiers to be trained efficiently without additional annotations or hyperparameter tuning. By isolating and removing these problematic examples, D3M can improve performance on minority groups while preserving most of the original dataset.

Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper tackles a big problem in machine learning: AI models often don’t work well for people who are underrepresented in the training data. The authors came up with a new way to fix this, called D3M. It removes the bad training examples that make the model fail on minority groups. This means we can train better models without adding lots of extra labels or making complicated changes.

Keywords

» Artificial intelligence  » Hyperparameter  » Machine learning