


On the Adversarial Risk of Test Time Adaptation: An Investigation into Realistic Test-Time Data Poisoning

by Yongyi Su, Yushu Li, Nanqing Liu, Kui Jia, Xulei Yang, Chuan-Sheng Foo, Xun Xu

First submitted to arXiv on: 7 Oct 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Computer Vision and Pattern Recognition (cs.CV)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper investigates the adversarial risk of test-time adaptation (TTA), which updates model weights during inference using incoming test data and can therefore be exposed to crafted adversarial samples. The authors review realistic assumptions for test-time data poisoning and propose a new attack method that generates poisoned samples without access to benign data. They also design two TTA-aware attack objectives and benchmark existing attack methods, finding that TTA methods are more robust than previously believed. Finally, they analyze effective defense strategies for developing adversarially robust TTA methods.
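To make the adaptation step concrete, below is a minimal sketch of one common TTA scheme, entropy minimization over normalization-layer parameters (in the spirit of TENT). It is an illustrative assumption rather than the exact TTA method or attack procedure studied in the paper; the function name and the choice of adapted layers are ours.

```python
# Minimal sketch of test-time adaptation via entropy minimization.
# Assumption: this illustrates the general TTA loop, not the paper's method.
import torch
import torch.nn as nn


def adapt_on_batch(model: nn.Module, x: torch.Tensor, lr: float = 1e-3) -> torch.Tensor:
    """Update the model on one unlabeled test batch, then return its predictions."""
    # Adapt only normalization-layer affine parameters (a common TTA choice).
    norm_params = [
        p
        for m in model.modules()
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.LayerNorm))
        for p in m.parameters()
        if p.requires_grad
    ]
    optimizer = torch.optim.SGD(norm_params, lr=lr)

    logits = model(x)
    # Entropy of the softmax predictions; minimizing it pushes the model
    # toward confident predictions on whatever data arrives at test time.
    log_probs = logits.log_softmax(dim=1)
    entropy = -(log_probs.exp() * log_probs).sum(dim=1).mean()

    optimizer.zero_grad()
    entropy.backward()   # gradients are driven entirely by unlabeled test data
    optimizer.step()     # model weights change during inference

    with torch.no_grad():
        return model(x).argmax(dim=1)
```

Because the update is driven only by whatever unlabeled data arrives at test time, an attacker who can inject crafted samples into that stream can steer the weight updates, which is the poisoning surface the paper analyzes.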

Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper is about making machine learning models safer by stopping attackers from tricking them at test time. It looks at how attackers can make test-time adaptation (TTA) worse by feeding it fake data designed to cause harm. The researchers come up with a new way for attackers to make this fake data without needing access to the good data. They also try out different ways of attacking TTA and find that some methods are better than others at staying safe. Finally, they talk about what can be done to make TTA even safer.

Keywords

  • Artificial intelligence
  • Inference
  • Machine learning