
DAFA: Distance-Aware Fair Adversarial Training

by Hyungyu Lee, Saehyung Lee, Hyemi Jang, Junsung Park, Ho Bae, Sungroh Yoon

First submitted to arXiv on: 23 Jan 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The robust fairness problem arises when the class-wise accuracy disparities of standard training are amplified by adversarial training. Existing approaches improve performance on harder classes by sacrificing easier ones, but we find that, under attack, models tend to predict classes similar to the true class rather than easy ones. Our analysis shows that robust fairness worsens as the distance between classes decreases. To address this, we introduce DAFA, a methodology that assigns a distinct loss weight and margin to each class and adjusts them to trade off performance among similar classes. Experiments across various datasets show that DAFA maintains average robust accuracy while significantly improving worst-class robust accuracy, achieving better robust fairness.
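The key idea in the summary above is that training weight should be shifted between *similar* classes (where the robustness trade-off actually happens) rather than between easy and hard classes globally. A minimal sketch of such class-wise reweighting is below; the function name, the inverse-distance similarity, and the pairwise weight-transfer rule are all illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def dafa_style_weights(class_errors, class_distances, lam=1.0):
    """Illustrative per-class loss weights in the spirit of DAFA:
    a hard class borrows weight from classes that are *close* to it,
    not from distant easy ones. The formulas here are assumptions
    for illustration, not the paper's exact update rule."""
    n = len(class_errors)
    # Similarity as inverse inter-class distance (closer => more similar);
    # the identity matrix avoids division by zero on the diagonal.
    sim = 1.0 / (class_distances + np.eye(n))
    np.fill_diagonal(sim, 0.0)
    weights = np.ones(n)
    for c in range(n):
        for j in range(n):
            # Antisymmetric transfer: weight flows toward the harder
            # member of each similar pair, so the total weight is preserved.
            weights[c] += lam * sim[c, j] * (class_errors[c] - class_errors[j])
    return weights

# Toy example: classes 0 and 1 are close; class 0 is harder (higher error).
dist = np.array([[0.0, 1.0, 5.0],
                 [1.0, 0.0, 5.0],
                 [5.0, 5.0, 0.0]])
err = np.array([0.6, 0.3, 0.3])
w = dafa_style_weights(err, dist)
# Class 0 gains weight mostly at the expense of its similar neighbor, class 1.
```

Because the transfers are pairwise and antisymmetric, the total weight budget stays fixed, which matches the summary's claim that average robust accuracy is maintained while worst-class robustness improves.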
Low Difficulty Summary (written by GrooveSquid.com, original content)
The paper is about making sure machines are fair when they’re trying to resist attacks. Right now, these machines do better on some things than others, which isn’t fair. The researchers found that when bad guys try to trick the machine, it tends to mix up things that look alike, rather than simply falling back on the easy things. They also discovered that the machine’s fairness gets worse when classes are more similar to each other. To fix this, they created a new way of training called DAFA, which helps the machine be fair by making it pay attention to how similar classes are. This makes the machine fairer across everything it recognizes, not just good at a few things.

Keywords

  • Artificial intelligence
  • Attention