Distributionally Generative Augmentation for Fair Facial Attribute Classification

by Fengda Zhang, Qianpei He, Kun Kuang, Jiashuo Liu, Long Chen, Chao Wu, Jun Xiao, Hanwang Zhang

First submitted to arXiv on 11 Mar 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here
Medium Difficulty Summary (written by GrooveSquid.com, original content)
The proposed framework tackles unfairness in Facial Attribute Classification (FAC) models, which can exhibit inconsistent accuracy across different data subpopulations. The problem stems from bias in the training data, where spurious attributes statistically correlate with the target attribute. Existing fairness-aware methods rely on labels for these spurious attributes, which may be unavailable in practice. To address this, the authors present a two-stage framework that trains a fair FAC model on biased data without additional annotation. The first stage uses generative models to identify potential spurious attributes and improves interpretability by visualizing them in image space. In the second stage, each image's spurious attributes are edited by random degrees sampled from a uniform distribution, while the target attribute is kept unchanged; a fair FAC model is then trained by enforcing invariance to these augmentations. Evaluated on three common datasets, the approach promotes fairness without compromising accuracy.
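The second stage described above, editing spurious attributes by random uniform degrees and training the classifier to be invariant to the edits, can be sketched with a toy consistency penalty. Everything below is illustrative, not the paper's implementation: the linear classifier, the known spurious direction, and all function names are assumptions; the actual method discovers the edit direction with generative models operating in image space.

```python
import numpy as np

rng = np.random.default_rng(0)

def classify(x, w):
    """Toy linear classifier returning a probability for the target attribute."""
    return 1.0 / (1.0 + np.exp(-x @ w))

def edit_spurious(x, direction, degree):
    """Hypothetical stand-in for a generative edit: move x along a (here,
    assumed known) spurious direction by a random degree, leaving the
    target signal intact."""
    return x + degree * direction

def invariance_loss(x, w, direction, n_aug=4):
    """Penalize prediction changes under random spurious edits, with
    degrees sampled from a uniform distribution as in the paper."""
    p0 = classify(x, w)
    degrees = rng.uniform(-1.0, 1.0, size=n_aug)
    return float(np.mean([(classify(edit_spurious(x, direction, d), w) - p0) ** 2
                          for d in degrees]))

x = rng.normal(size=4)
spurious_dir = np.array([1.0, 0.0, 0.0, 0.0])
w_fair = np.array([0.0, 1.0, 1.0, 1.0])    # ignores the spurious direction
w_biased = np.array([2.0, 0.0, 0.0, 0.0])  # relies entirely on it
loss_fair = invariance_loss(x, w_fair, spurious_dir)
loss_biased = invariance_loss(x, w_biased, spurious_dir)
```

A classifier whose weights are orthogonal to the spurious direction incurs zero penalty, while one that relies on that direction is penalized; in the real framework this penalty is added to the usual classification loss so the model learns to ignore the discovered spurious attributes.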
Low Difficulty Summary (written by GrooveSquid.com, original content)
Facial Attribute Classification (FAC) can be used in many applications, so it's important that the models are fair: they shouldn't be more accurate for some groups of people than for others. Current methods try to fix this using extra labels describing what makes people look different, but those labels aren't always available. Instead, this new approach uses two stages to train a fair FAC model without extra data. The first stage looks for patterns in the images that could be making the model unfair. The second stage changes those patterns randomly so the model learns not to rely on them, helping it treat everyone fairly.

Keywords

  • Artificial intelligence
  • Classification