Using Self-supervised Learning Can Improve Model Fairness

by Sofia Yfantidou, Dimitris Spathis, Marios Constantinides, Athena Vakali, Daniele Quercia, Fahim Kawsar

First submitted to arXiv on: 4 Jun 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com; original content)

This study investigates the impact of self-supervised learning (SSL) on machine learning fairness, exploring whether pre-training and fine-tuning strategies can reduce bias in large models. The authors introduce a five-stage framework for assessing the fairness of SSL, covering dataset requirements, pre-training, fine-tuning, representation similarity analysis, and domain-specific evaluation. They evaluate their method on three human-centric datasets (MIMIC, MESA, and GLOBEM), comparing hundreds of SSL and fine-tuned models across various dimensions. The findings show that SSL can significantly improve model fairness while maintaining performance, achieving up to a 30% increase in fairness with only a minimal loss in performance. The authors attribute this difference to representation dissimilarities between the best- and worst-performing demographic groups across models.
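
One way to picture the representation similarity analysis stage: linear centered kernel alignment (CKA) is a common measure of how similar two sets of embeddings of the same inputs are, and comparing it across demographic groups shows where two models diverge most. The sketch below is illustrative only, not the paper's code; the function `linear_cka` and the toy data are our own, and the paper may use a different similarity measure.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two representation matrices.

    X: (n, d1) embeddings of n inputs from model A.
    Y: (n, d2) embeddings of the SAME n inputs from model B.
    Returns a score in [0, 1]; lower values mean the two models
    represent these inputs more differently.
    """
    # Center each feature dimension before comparing.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    hsic = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, ord="fro")
    norm_y = np.linalg.norm(Y.T @ Y, ord="fro")
    return hsic / (norm_x * norm_y)

# Toy example: how much did fine-tuning change the representations of
# one demographic subgroup? (Random data stands in for real embeddings.)
rng = np.random.default_rng(0)
emb_ssl = rng.normal(size=(100, 32))   # SSL encoder outputs for group g
emb_ft = rng.normal(size=(100, 32))    # fine-tuned outputs, same inputs
print("group g CKA:", linear_cka(emb_ssl, emb_ft))
```

Running the same comparison separately for the best- and worst-performing groups is the kind of analysis that can surface the representation dissimilarities the summary mentions.
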
Low Difficulty Summary (written by GrooveSquid.com; original content)

This study looks at how machine learning can be made fairer. The authors try to figure out whether letting big models learn on their own (without labels) helps get rid of biases in their predictions. They build a framework to test this in five steps: getting the right data, pre-training, fine-tuning, checking how differently the model represents different groups of people, and evaluating the results for each group separately. They use three real-life datasets about people's health and daily lives (MIMIC, MESA, and GLOBEM) to see how well the approach works. The study shows that letting big models learn on their own can actually make them fairer, with only a small loss in performance.
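
As a concrete picture of "evaluating the results for each group separately", here is a minimal sketch (not the paper's code) that scores each demographic group and reports the gap between the best- and worst-performing groups; a smaller gap means a fairer model. The helper `group_performance_gap` and the toy data are hypothetical, and accuracy stands in for whichever fairness metrics the paper actually uses.

```python
import numpy as np

def group_performance_gap(y_true, y_pred, groups):
    """Accuracy per demographic group, plus the gap between the
    best- and worst-performing groups (smaller gap = fairer)."""
    accs = {}
    for g in np.unique(groups):
        mask = groups == g
        accs[g] = float((y_pred[mask] == y_true[mask]).mean())
    gap = max(accs.values()) - min(accs.values())
    return accs, gap

# Toy example with a hypothetical two-group attribute.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
groups = np.array(["F", "F", "F", "F", "M", "M", "M", "M"])
accs, gap = group_performance_gap(y_true, y_pred, groups)
print(accs, gap)  # {'F': 0.75, 'M': 0.75} 0.0
```
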

Keywords

» Artificial intelligence  » Fine-tuning  » Machine learning  » Self-supervised