
Benchmarking the Fairness of Image Upsampling Methods

by Mike Laszkiewicz, Imant Daunhawer, Julia E. Vogt, Asja Fischer, Johannes Lederer

First submitted to arXiv on: 24 Jan 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary — written by the paper authors
This is the paper's original abstract, available via the arXiv link above.
Medium Difficulty Summary — written by GrooveSquid.com (original content)
Recent years have seen a surge in deep generative models for creating synthetic media such as images and videos. While these models hold promise for practical applications, it is crucial to evaluate their fairness. This paper introduces a comprehensive framework for benchmarking the performance and fairness of conditional generative models. The framework includes a set of metrics, inspired by their supervised-learning counterparts, to assess model fairness and diversity. The authors focus on image upsampling and create a benchmark covering a variety of modern upsampling methods. A notable component is UnfairFace, a dataset that replicates the racial distribution of common large-scale face datasets. The empirical results show the importance of unbiased training sets and reveal that algorithms respond differently to dataset imbalances. Alarmingly, none of the considered methods produces statistically fair and diverse results. A public repository is provided to ensure reproducibility.
Low Difficulty Summary — written by GrooveSquid.com (original content)
Imagine a world where computers can create fake images or videos that look real. This is already happening with deep learning models! But there's a problem: these models might not be fair. For example, they could make people of certain races look worse than others. This paper aims to solve that problem by creating a way to test how fair these models are. The authors focus on a specific task called image upsampling and build a benchmark that shows which methods are better or worse at being fair. Surprisingly, none of the tested methods did very well in terms of fairness. The authors hope their work will help make sure computers don't discriminate against certain groups.

Keywords

  • Artificial intelligence
  • Deep learning
  • Supervised