Constrained Adaptive Attack: Effective Adversarial Attack Against Deep Neural Networks for Tabular Data

by Thibault Simonetto, Salah Ghamizi, Maxime Cordy

First submitted to arXiv on: 2 Jun 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Cryptography and Security (cs.CR)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary

Written by the paper authors. The high difficulty version is the paper's original abstract.

Medium Difficulty Summary

Written by GrooveSquid.com (original content). This paper proposes two new attacks, CAPGD and CAA, for evaluating the adversarial robustness of deep tabular models. CAPGD is an adaptive gradient attack that overcomes the limitations of existing gradient attacks, reducing model accuracy by up to 81 percentage points more than previous gradient attacks. The authors also design CAA, a combination of CAPGD and MOEVA, which outperforms all existing attacks in 17 of 20 settings and causes a maximum accuracy drop of 96.1 percentage points. They demonstrate the effectiveness of their attacks on five architectures and four critical use cases. This research matters for tabular machine learning because it sets a new benchmark attack for evaluating defenses and robust architectures.

Low Difficulty Summary

Written by GrooveSquid.com (original content). Deep learning models are being used in industrial settings to analyze tabular data, but the paper says these models aren't very good at defending against attacks, and there are no effective ways to test how well they withstand bad data. The authors create two new kinds of attacks to help with this problem. One is called CAPGD, an "adaptive gradient attack" that makes models perform much worse, dropping their accuracy by up to 81 percentage points. The other is CAA, which combines CAPGD with another attack method. The paper shows how well these attacks work on different models and real-world examples, and argues that because the new attacks beat existing ones, they should be used to test defenses.

Keywords

» Artificial intelligence  » Deep learning  » Machine learning