
Summary of Is Adversarial Training with Compressed Datasets Effective?, by Tong Chen et al.


Is Adversarial Training with Compressed Datasets Effective?

by Tong Chen, Raghavendra Selvan

First submitted to arXiv on: 8 Feb 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

The high difficulty version is the paper’s original abstract.

Medium Difficulty Summary (written by GrooveSquid.com, original content)

This paper explores the intersection of dataset compression and adversarial robustness in machine learning. Recent advancements in dataset condensation (DC) have focused primarily on achieving high test performance with limited data, neglecting the crucial aspect of robustness against adversarial attacks. The authors investigate how DC methods affect the robustness of models trained on compressed datasets, revealing that these methods do not effectively transfer robustness to the resulting models. To address this limitation, the researchers propose a novel robustness-aware dataset compression method based on finding the Minimal Finite Covering (MFC) of the dataset. This approach offers several benefits, including one-time computation, applicability to any model, and provable robustness by minimizing the generalized adversarial loss. Empirical evaluations on three datasets show that the proposed method achieves a better trade-off between robustness and performance than existing DC methods.

Low Difficulty Summary (written by GrooveSquid.com, original content)

This paper looks at how to make machine learning models more resistant to fake or misleading data. Some researchers have been working on ways to shrink big datasets into smaller ones while keeping the important information. However, these methods haven’t focused much on making sure the model is also good at handling fake data. The authors of this paper wanted to see if these compression methods would help make models more robust against attacks. They found that they don’t really improve robustness, and then came up with a new way to compress datasets that also makes models better at handling fake data. This approach has some nice features, like being able to do it just once and working for any model. The authors tested their idea on three different datasets and showed that it does a good job of balancing how well the model works and how well it can handle fake data.

Keywords

  • Artificial intelligence
  • Machine learning