Learning Fair Robustness via Domain Mixup

by Meiyu Zhong, Ravi Tandon

First submitted to arxiv on: 21 Nov 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Cryptography and Security (cs.CR); Computers and Society (cs.CY)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper’s original abstract, available on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com; original content)
Adversarial training is a prominent technique for building classifiers that are robust to attacks. However, recent studies have shown that it does not necessarily provide equal levels of robustness across classes. To address this issue, we propose using mixup to learn fairly robust classifiers that offer similar robustness for all classes. By mixing inputs from the same class and performing adversarial training on these mixed-up inputs, our approach provably reduces the disparity in class-wise robustness, both for natural risk and under adversarial attacks. Our theoretical analysis focuses on linear classifiers and shows that mixup combined with adversarial training effectively narrows class-wise gaps. We also provide experimental results on synthetic data and the real-world CIFAR-10 dataset, which show significant reductions in class-wise disparity for both natural and adversarial risks.
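To make the idea concrete, here is a minimal NumPy sketch of intra-class mixup combined with adversarial training, in the linear-classifier setting the paper’s theory focuses on. This is an illustration under stated assumptions, not the authors’ implementation: the toy Gaussian data, the logistic loss, the Beta(α, α) mixing weight, and the FGSM-style L∞ attack are all choices made here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class Gaussian data (a stand-in for the paper's synthetic setting).
n, d = 200, 5
X = np.vstack([rng.normal(+1.0, 1.0, (n, d)), rng.normal(-1.0, 1.0, (n, d))])
y = np.hstack([np.ones(n), -np.ones(n)])

w = np.zeros(d)
eps, lr, alpha = 0.1, 0.05, 0.2  # eps: attack budget; alpha: mixup Beta parameter

def grad_logistic(w, x, yi):
    # Gradient of log(1 + exp(-yi * w.x)) with respect to w.
    return -yi * x / (1.0 + np.exp(yi * (w @ x)))

for epoch in range(50):
    for i in rng.permutation(len(y)):
        # Intra-class mixup: pair x_i with another sample of the SAME class.
        same = np.flatnonzero(y == y[i])
        j = rng.choice(same)
        lam = rng.beta(alpha, alpha)
        x_mix = lam * X[i] + (1 - lam) * X[j]
        # FGSM-style adversarial example on the mixed input: for a linear
        # model, the worst-case L-inf perturbation is -eps * y * sign(w).
        x_adv = x_mix - eps * y[i] * np.sign(w) if np.any(w) else x_mix
        w -= lr * grad_logistic(w, x_adv, y[i])

def robust_err(cls):
    # Class-wise robust error: the worst-case margin of a linear classifier
    # under an L-inf budget eps is y * w.x - eps * ||w||_1.
    Xc = X[y == cls]
    margins = cls * (Xc @ w) - eps * np.abs(w).sum()
    return float((margins <= 0).mean())

print(robust_err(+1), robust_err(-1))
```

Comparing `robust_err(+1)` and `robust_err(-1)` is one simple way to measure the class-wise robustness disparity that the paper aims to reduce; because both inputs in each mixed pair share a label, no label mixing is needed, which is what distinguishes this intra-class variant from standard mixup.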
Low Difficulty Summary (written by GrooveSquid.com; original content)
Adversarial training is a way to make computer models more secure against fake or manipulated inputs. However, researchers have found that this method doesn’t always work equally well for all classes of data. To fix this problem, we propose using a technique called mixup. Mixup blends pairs of examples from the same class and trains the model on these blended examples. Our approach makes it possible to build fairer, more robust models that perform similarly well for all classes of data. We tested our method on both synthetic and real-world data and found significant improvements.

Keywords

» Artificial intelligence  » Synthetic data