Towards Robust Object Detection: Identifying and Removing Backdoors via Module Inconsistency Analysis

by Xianda Zhang, Siyuan Liang

First submitted to arXiv on: 24 Sep 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)

Object detection models are vulnerable to backdoor attacks, in which inputs containing a specific trigger pattern are deliberately misclassified. Existing defense techniques fail to effectively detect and remove backdoors in object detectors. We propose a tailored framework built on the observation that backdoor behavior manifests as inconsistencies between the behaviors of a detector's local modules. Our algorithm detects backdoors by quantifying these inconsistencies and removes them by localizing the affected module, resetting its parameters, and fine-tuning the model on a small clean dataset. Our method achieves a 90% improvement in backdoor removal rate over fine-tuning baselines while limiting clean-accuracy loss to less than 4%. This work presents the first approach that addresses both detection and removal of backdoors in two-stage object detection models.
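
To make the pipeline concrete, here is a minimal, hypothetical PyTorch-style sketch of the idea, not the paper's code. Everything in it is an assumption made for illustration: the detector interface (`backbone` / `rpn` / `roi_head` attributes and a training-mode loss dictionary), the disagreement score, the threshold, and the choice of which module to reset.

```python
# Hypothetical sketch of backdoor detection via module-inconsistency
# analysis and removal via parameter reset + clean fine-tuning.
# Interfaces and metric are illustrative assumptions, not the authors' code.
import torch


@torch.no_grad()
def module_inconsistency(detector, images):
    """Quantify how much two local modules of a two-stage detector
    disagree on the same proposals: here, the gap between the RPN's
    objectness and the RoI head's best foreground confidence."""
    feats = detector.backbone(images)
    proposals, objectness = detector.rpn(feats)           # assumed API
    class_logits = detector.roi_head(feats, proposals)    # assumed API
    # Assumes the last logit column is a background class.
    fg_conf = class_logits.softmax(-1)[..., :-1].max(-1).values
    return (objectness.sigmoid() - fg_conf).abs().mean().item()


def remove_backdoor(detector, suspect_images, clean_loader,
                    threshold=0.5, lr=1e-3, epochs=1):
    """If the inconsistency score flags a backdoor, re-initialize the
    module localized as affected (here, naively, the RoI head) and
    fine-tune briefly on a small clean dataset."""
    if module_inconsistency(detector, suspect_images) <= threshold:
        return detector  # no backdoor evidence; leave model untouched

    # Reset the parameters of the localized module.
    for m in detector.roi_head.modules():
        if hasattr(m, "reset_parameters"):
            m.reset_parameters()

    # Repair with brief fine-tuning on clean data.
    opt = torch.optim.SGD(detector.parameters(), lr=lr, momentum=0.9)
    detector.train()
    for _ in range(epochs):
        for images, targets in clean_loader:
            loss = sum(detector(images, targets).values())  # assumed loss-dict API
            opt.zero_grad()
            loss.backward()
            opt.step()
    return detector
```

In the actual method, the affected module would presumably be localized by comparing scores across modules rather than hard-coded as above; the sketch fixes one module only to keep the example short.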

Low Difficulty Summary (original content by GrooveSquid.com)

Backdoor attacks can make object detection models misbehave whenever they see a hidden trigger pattern. These models are used in security-critical applications, so it's important to detect and remove such attacks. Existing methods don't work well for object detectors, so we came up with a new approach: we looked at how different parts of the model behave when they see certain patterns, and used those disagreements to build an algorithm that finds backdoors and gets rid of them. Our method is better than existing ones at finding and removing backdoors in object detection models.

Keywords

» Artificial intelligence  » Fine tuning  » Object detection