Summary of Mitigation of Gender Bias in Automatic Facial Non-verbal Behaviors Generation, by Alice Delbosc et al.


Mitigation of gender bias in automatic facial non-verbal behaviors generation

by Alice Delbosc, Magalie Ochs, Nicolas Sabouret, Brian Ravenet, Stephane Ayache

First submitted to arXiv on: 9 Oct 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI); Human-Computer Interaction (cs.HC); Machine Learning (cs.LG); Neural and Evolutionary Computing (cs.NE)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same paper at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to read the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper tackles the issue of gender bias in non-verbal behavior generation for social interactive agents by examining how facial non-verbal cues differ by gender. The authors introduce a classifier that can accurately determine the gender of a speaker from their gaze, head movements, and facial expressions. They then propose FairGenderGen, a new behavior generation model that integrates a gender discriminator and a gradient reversal layer to reduce gender-specific patterns in the generated behaviors. In their experiments, the gender classifier can no longer reliably distinguish the speaker's gender from the non-verbal behaviors generated by FairGenderGen.
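
The gradient reversal layer mentioned in the medium summary is the standard trick from adversarial training: it acts as the identity in the forward pass but negates gradients in the backward pass, so the generator upstream is pushed to remove the information a downstream discriminator relies on. The sketch below is a minimal, hypothetical PyTorch illustration of that mechanism; the class names, layer sizes, and feature dimensions are assumptions for illustration, not the authors' actual FairGenderGen code.

```python
import torch
from torch import nn

class GradientReversal(torch.autograd.Function):
    """Identity on the forward pass; flips and scales gradients on the way back."""
    @staticmethod
    def forward(ctx, x, lambda_):
        ctx.lambda_ = lambda_
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        # Negate the gradient so the generator upstream learns to erase
        # whatever signal the discriminator below is exploiting.
        return -ctx.lambda_ * grad_output, None

class GenderDiscriminator(nn.Module):
    """Hypothetical adversarial head: predicts speaker gender from generated
    behavior features (e.g., gaze, head movement, facial expression vectors)."""
    def __init__(self, feature_dim: int, lambda_: float = 1.0):
        super().__init__()
        self.lambda_ = lambda_
        self.net = nn.Sequential(
            nn.Linear(feature_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 1),  # single logit for a binary gender label
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        reversed_features = GradientReversal.apply(features, self.lambda_)
        return self.net(reversed_features)
```

During training, the discriminator's binary cross-entropy loss is minimized as usual, but because its input passes through the reversal layer, the same gradients push the behavior generator in the opposite direction, toward features from which gender cannot be recovered.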
Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper talks about programs that make a computer-generated face move and react like a real person's. Right now, these programs often show biases because they were trained on biased data. The authors think this is a problem and want to fix it by making sure the computer-generated facial expressions don't show any gender bias. They developed a new model that can generate facial expressions from what people are saying, and it does a good job of hiding whether the speaker is male or female.

Keywords

  • Artificial intelligence