Mask-based Invisible Backdoor Attacks on Object Detection

by Jeongjin Shin

First submitted to arXiv on: 20 Mar 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI); Cryptography and Security (cs.CR)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
Deep learning models have achieved impressive results in object detection, but they are vulnerable to backdoor attacks. These attacks leave a model behaving normally on clean inputs, yet cause it to act maliciously when a predefined trigger appears. While backdoor attacks on image classification have been studied extensively, their application to object detection remains relatively underexplored. Given how widely object detection is deployed in critical scenarios, these vulnerabilities are particularly concerning. This study proposes an effective invisible backdoor attack on object detection using a mask-based approach and explores three distinct attack scenarios: object disappearance, misclassification, and generation attacks. Extensive experiments examine the effectiveness of these attacks and evaluate defense methods as potential countermeasures.

Low Difficulty Summary (written by GrooveSquid.com, original content)
Deep learning models are super good at finding things in pictures! But they have a secret weakness: backdoor attacks can trick them into doing bad things when shown a special trigger. This is really important because object detection is used in self-driving cars, security systems, and more. The researchers behind this study created a new kind of attack that hides inside the mask of an object. They tried three different ways to do this: making objects disappear, misclassifying them, or creating fake ones. They wanted to see how well these attacks worked and what we can do to stop them.

Keywords

  • Artificial intelligence
  • Deep learning
  • Image classification
  • Mask
  • Object detection