TART: Boosting Clean Accuracy Through Tangent Direction Guided Adversarial Training

by Bongsoo Yi, Rongjie Lai, Yao Li

First submitted to arXiv on: 27 Aug 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Cryptography and Security (cs.CR)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
This paper proposes a novel method, Tangent Direction Guided Adversarial Training (TART), to enhance the robustness of deep neural networks against adversarial attacks while maintaining accuracy on clean data. The authors argue that existing adversarial defense algorithms significantly alter the decision boundary and hurt accuracy when trained with adversarial examples having large normal components. TART mitigates this issue by estimating the tangent direction of adversarial examples and allocating an adaptive perturbation limit according to their tangential component’s norm. The results demonstrate that TART consistently boosts clean accuracy while retaining a high level of robustness against adversarial attacks, outperforming existing methods on both simulated and benchmark datasets.
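The core idea in the summary above can be sketched in code: estimate the tangent space of the data manifold at a point, measure how much of an adversarial perturbation lies in that tangent space, and allocate a larger perturbation budget when the tangential component is large. The sketch below is illustrative only, not the authors' implementation: it uses local PCA over nearby samples as an assumed stand-in for the paper's tangent-direction estimator, and the function names (`tangent_basis`, `adaptive_epsilon`) and the linear `eps_base`/`eps_max` schedule are hypothetical.

```python
import numpy as np

def tangent_basis(neighbors, center, dim):
    """Estimate a tangent basis at `center` via local PCA of nearby samples.

    Returns a (dim, d) matrix whose rows span the estimated tangent space.
    (A stand-in for whatever manifold estimator TART actually uses.)
    """
    X = neighbors - center  # center the neighborhood
    # Top right-singular vectors give the principal local directions.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:dim]

def adaptive_epsilon(delta, T, eps_base, eps_max):
    """Scale the perturbation limit by the tangential fraction of `delta`.

    `T` is a (dim, d) tangent basis from `tangent_basis`. A perturbation
    lying mostly in the tangent space gets a budget near `eps_max`; one
    mostly normal to the manifold gets a budget near `eps_base`.
    """
    tangential = T.T @ (T @ delta)              # projection onto tangent space
    ratio = np.linalg.norm(tangential) / (np.linalg.norm(delta) + 1e-12)
    return eps_base + (eps_max - eps_base) * ratio
```

For example, with data lying on a 1-D manifold (a line), a perturbation along the line receives the full budget while a perturbation orthogonal to it is capped at the base budget, which is the adaptive-limit behavior the summary describes.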
Low Difficulty Summary (original content by GrooveSquid.com)
This paper is about making computer vision models more resistant to fake images. Right now, there are ways to make these models stronger, but they also make the model less accurate when dealing with real pictures. The researchers found that this problem happens because the methods for making models stronger change how the model makes decisions. They developed a new way called TART (Tangent Direction Guided Adversarial Training) that helps the model stay accurate and strong at the same time. By using special directions to guide the training, TART is able to make models that are both robust and accurate.

Keywords

* Artificial intelligence