
Summary of Evaluating Fairness in Self-supervised and Supervised Models for Sequential Data, by Sofia Yfantidou et al.


Evaluating Fairness in Self-supervised and Supervised Models for Sequential Data

by Sofia Yfantidou, Dimitris Spathis, Marios Constantinides, Athena Vakali, Daniele Quercia, Fahim Kawsar

First submitted to arXiv on: 3 Jan 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Computers and Society (cs.CY)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com; original content)
This paper investigates the impact of self-supervised learning (SSL) on fairness in machine learning models. Specifically, it examines how pre-training and fine-tuning strategies affect a model’s performance across different demographic groups. The authors hypothesize that SSL models learn more generic representations and therefore produce less biased results. By comparing SSL models with their supervised counterparts, the study finds that SSL can achieve comparable performance while significantly improving fairness: up to a 27% gain in fairness at the cost of only a 1% loss in performance. This work highlights the potential of SSL in human-centric computing, particularly in high-stakes, data-scarce application domains such as healthcare. (A rough sketch of how such a fairness comparison might be set up appears after the summaries.)

Low Difficulty Summary (written by GrooveSquid.com; original content)
This paper looks at how machine learning models can learn without being told what’s right or wrong. The authors want to know whether this way of learning is fairer than traditional methods. They test different ways of training the models and find that they can perform just as well while producing more balanced results across groups of people. This is important because we need machines to make good decisions without favoring certain groups over others.
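To make the evaluation described in the medium-difficulty summary concrete, here is a minimal sketch of one way to compare a fine-tuned SSL model against a supervised baseline. It assumes fairness is measured as the largest accuracy gap between demographic groups (an accuracy-parity notion); the paper’s exact metrics, models, and data may differ. The function name accuracy_parity_gap and the synthetic labels, groups, and predictions below are illustrative placeholders, not the authors’ code.

```python
import numpy as np

def accuracy_parity_gap(y_true, y_pred, group):
    """Largest accuracy difference between any two demographic groups.

    A smaller gap means more even performance across groups under this
    (assumed) accuracy-parity notion of fairness.
    """
    accuracies = []
    for g in np.unique(group):
        mask = group == g
        accuracies.append(np.mean(y_true[mask] == y_pred[mask]))
    return max(accuracies) - min(accuracies)

# Synthetic stand-ins: in a real evaluation, y_true, group, and the two
# prediction arrays would come from a held-out test split and the two
# trained models (SSL-pretrained + fine-tuned vs. fully supervised).
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)   # binary labels
group = rng.integers(0, 3, size=1000)    # three demographic groups
ssl_preds = np.where(rng.random(1000) < 0.90, y_true, 1 - y_true)
sup_preds = np.where(rng.random(1000) < 0.85, y_true, 1 - y_true)

print("SSL model accuracy-parity gap:       ", accuracy_parity_gap(y_true, ssl_preds, group))
print("Supervised model accuracy-parity gap:", accuracy_parity_gap(y_true, sup_preds, group))
```

A lower gap for one model indicates more even accuracy across groups; a full evaluation would also report overall performance, since the paper’s key finding is that the fairness gain (up to 27%) comes at only about a 1% performance cost.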

Keywords

* Artificial intelligence
* Fine-tuning
* Machine learning
* Self-supervised
* Supervised