Steal Now and Attack Later: Evaluating Robustness of Object Detection against Black-box Adversarial Attacks

by Erh-Chung Chen, Pin-Yu Chen, I-Hsin Chung, Che-Rung Lee

First submitted to arXiv on: 24 Apr 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty summary is the paper's original abstract, which can be read on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper introduces latency attacks against object detection, which aim to inflate inference time by generating "ghost objects" in an image. The challenge lies in generating these ghost objects without knowledge of the target model's internal workings. The researchers demonstrate the feasibility of such attacks using "steal now, decrypt later" methods. The resulting adversarial examples can exploit potential vulnerabilities in AI services, posing security concerns. Experimental results show that the proposed attack succeeds against various models and the Google Vision API without any prior knowledge of the target model, with an average cost of under $1. A short sketch after these summaries illustrates why extra ghost objects drive up inference time.

Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper is about a new way to make object detection systems slower by adding fake objects to images. It's like trying to confuse someone by adding extra things to what they're looking at. The researchers found a way to do this without knowing the details of how the target system works, which makes it harder for the system to defend against these attacks. This could be a problem because object detection is used in important ways, like recognizing objects in self-driving cars or medical images.

Keywords

» Artificial intelligence  » Inference  » Object detection