
Summary of NoiseAttack: An Evasive Sample-Specific Multi-Targeted Backdoor Attack Through White Gaussian Noise, by Abdullah Arafat Miah et al.


NoiseAttack: An Evasive Sample-Specific Multi-Targeted Backdoor Attack Through White Gaussian Noise

by Abdullah Arafat Miah, Kaan Icer, Resit Sendag, Yu Bi

First submitted to arXiv on: 3 Sep 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI); Cryptography and Security (cs.CR); Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper introduces NoiseAttack, a novel sample-specific, multi-targeted backdoor attack that can induce multiple target classes with minimal input configuration. The authors design triggers using White Gaussian Noise (WGN) with varying Power Spectral Densities (PSD), coupled with a dedicated training strategy to execute the backdoor attack. This work is the first vision backdoor attack designed to produce multiple targeted classes. The authors demonstrate the effectiveness of NoiseAttack against popular network architectures and datasets, as well as its ability to bypass state-of-the-art backdoor detection methods.
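
To make the core idea more concrete, here is a minimal, illustrative sketch (not the authors' code) of how a white-Gaussian-noise trigger could poison a training set: each noise power, which for white noise equals its flat power spectral density, is mapped to a different target label, so the same kind of victim image can be steered toward several classes. The function names, noise-power values, and relabeling scheme below are assumptions made for illustration only.

```python
# Illustrative sketch of a WGN-based multi-target poisoning step (not the authors' code).
# Assumption: each target class is associated with a distinct noise power (PSD level),
# and poisoned samples are created by adding zero-mean white Gaussian noise of that power.

import numpy as np

def make_wgn_trigger(shape, noise_power, rng):
    """White Gaussian noise trigger; for white noise the PSD is flat and equals the variance."""
    std = np.sqrt(noise_power)
    return rng.normal(loc=0.0, scale=std, size=shape)

def poison_sample(image, noise_power, rng):
    """Add a WGN trigger to an image (pixel values assumed in [0, 1]) and clip to the valid range."""
    noisy = image + make_wgn_trigger(image.shape, noise_power, rng)
    return np.clip(noisy, 0.0, 1.0)

def build_poisoned_set(images, labels, power_to_target, poison_rate, rng):
    """
    Poison a fraction of the training set.
    power_to_target maps a noise power (hypothetical values) to the attacker's target label,
    so different noise powers steer the model toward different classes at inference time.
    """
    images = images.copy()
    labels = labels.copy()
    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    powers = list(power_to_target.keys())
    for i in idx:
        p = powers[rng.integers(len(powers))]   # pick one of the configured noise powers
        images[i] = poison_sample(images[i], p, rng)
        labels[i] = power_to_target[p]          # relabel to the corresponding target class
    return images, labels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy data standing in for a vision dataset (e.g., 32x32 RGB images, 10 classes).
    X = rng.random((100, 32, 32, 3)).astype(np.float32)
    y = rng.integers(0, 10, size=100)
    # Hypothetical mapping: two noise powers, each forcing a different target class.
    power_to_target = {0.01: 3, 0.05: 7}
    Xp, yp = build_poisoned_set(X, y, power_to_target, poison_rate=0.1, rng=rng)
    print("poisoned labels changed:", np.sum(yp != y))
```

Because the trigger is just low-power noise spread across the whole image rather than a visible patch, poisoned samples look unremarkable, which is consistent with the paper's claim that the attack is evasive.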
Low Difficulty Summary (written by GrooveSquid.com, original content)
NoiseAttack is a new way to hack into AI systems. It works like hiding a special kind of noise in the pictures an AI learns from, so that later, when an attacker adds that noise again, the AI makes the mistake the attacker wants. By tuning the noise, attackers can push the AI toward several different wrong answers, as long as they know the right "secret codes" (noise settings) to use. This is bad because it could let attackers manipulate things like self-driving cars or medical imaging equipment. Worse, the researchers showed that NoiseAttack can slip past today's best backdoor detection tools, so new defenses will be needed.

Keywords

* Artificial intelligence