
A Large-Scale Empirical Study on Improving the Fairness of Image Classification Models

by Junjie Yang, Jiajun Jiang, Zeyu Sun, Junjie Chen

First submitted to arXiv on: 8 Jan 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (GrooveSquid.com original content)
This paper tackles the problem of unfairness in deep learning models, a crucial issue that hinders their adoption in real-world scenarios. The authors conduct a comprehensive empirical study comparing 13 state-of-the-art fairness-improving methods across three image classification datasets and five performance metrics. The results show that a method's performance varies substantially with the dataset and the sensitive attribute, suggesting overfitting to specific datasets. Moreover, different evaluation metrics yield distinct assessments, highlighting the importance of evaluating fairness from multiple perspectives. The study also finds that pre-processing and in-processing methods outperform post-processing methods, with pre-processing methods achieving the best overall performance.
Low Difficulty Summary (GrooveSquid.com original content)
This paper is about making sure AI models are fair and don't discriminate against certain groups. Many different ways of making models fair have been proposed, but no one has compared them all at once. That is what this study does: it evaluates 13 methods for improving fairness in deep learning models, using three large image datasets and five ways of measuring how well they work. The results show that each method works better or worse depending on the dataset and on which group it is trying to be fair to, and that some methods are consistently better than others. Overall, this study helps us understand what makes a fairness-improving method good or bad and will help researchers develop even better solutions.
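To make the idea of "five ways to measure fairness" concrete, here is a minimal sketch of two metrics commonly used in fairness studies: demographic parity difference and equal opportunity difference. The summaries above do not name the paper's actual metrics, so these two, along with all function names and toy data below, are illustrative assumptions rather than details from the paper.

```python
# Hypothetical sketch of two common fairness metrics (not necessarily
# the ones used in the paper). Sensitive attribute is binary (0/1).

def demographic_parity_diff(y_pred, sensitive):
    """Absolute gap in positive-prediction rates between the two groups."""
    groups = {0: [], 1: []}
    for p, s in zip(y_pred, sensitive):
        groups[s].append(p)
    rate = lambda g: sum(g) / len(g)
    return abs(rate(groups[0]) - rate(groups[1]))

def equal_opportunity_diff(y_true, y_pred, sensitive):
    """Absolute gap in true-positive rates between the two groups."""
    tpr = {}
    for s in (0, 1):
        # Predictions for samples whose true label is positive, per group.
        pos = [p for t, p, a in zip(y_true, y_pred, sensitive)
               if t == 1 and a == s]
        tpr[s] = sum(pos) / len(pos)
    return abs(tpr[0] - tpr[1])

# Toy data: 8 samples, first 4 in group 0, last 4 in group 1.
y_true    = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred    = [1, 0, 0, 0, 1, 1, 1, 0]
sensitive = [0, 0, 0, 0, 1, 1, 1, 1]

print(demographic_parity_diff(y_pred, sensitive))            # 0.5
print(equal_opportunity_diff(y_true, y_pred, sensitive))     # 0.5
```

A score of 0 on either metric would mean the classifier treats both groups identically by that criterion; the study's observation that different metrics disagree follows naturally, since a model can equalize prediction rates without equalizing true-positive rates, and vice versa.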

Keywords

* Artificial intelligence  * Boosting  * Deep learning  * Image classification  * Overfitting