Summary of "Evaluating the Adversarial Robustness of Detection Transformers" by Amirhossein Nazeri et al.
Evaluating the Adversarial Robustness of Detection Transformers
by Amirhossein Nazeri, Chunheng Zhao, Pierluigi Pisu
First submitted to arXiv on: 25 Dec 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
High Difficulty Summary (written by the paper authors)
The high difficulty summary is the paper's original abstract, which can be read on arXiv.
Medium Difficulty Summary (original content by GrooveSquid.com)
Robust object detection is crucial for autonomous driving and mobile robotics, where accurate detection of vehicles, pedestrians, and obstacles ensures safety. Despite advances in detection transformer (DETR) models, their susceptibility to adversarial attacks remains unexplored. This paper evaluates DETR and its variants under white-box and black-box attacks on the MS-COCO and KITTI datasets, covering both general and autonomous driving scenarios. The results demonstrate that DETR models are vulnerable to adversarial attacks, much like traditional CNN-based detectors. The transferability analysis reveals high intra-network transferability among DETR variants but limited cross-network transferability to CNN-based models. The authors also propose a novel untargeted attack tailored to DETR that exploits intermediate loss functions to induce misclassification with minimal perturbations. Visualizations of self-attention feature maps offer insight into how the attacks affect the models' internal representations. These findings highlight critical vulnerabilities in transformer-based object detectors under standard attacks and underscore the need for robustness enhancements in safety-critical applications.
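To make the attack setting concrete, below is a minimal sketch of a PGD-style untargeted attack of the kind evaluated in work like this. The `model` interface (a callable returning a scalar detection loss) and the hyperparameter values are assumptions for illustration; the paper's exact intermediate-loss formulation is not given in this summary.

```python
import torch

def pgd_untargeted(model, image, targets, eps=8 / 255, alpha=2 / 255, steps=10):
    """Sketch of a PGD-style untargeted attack on an object detector.

    Assumes a hypothetical interface where `model(image, targets)`
    returns a scalar, differentiable detection loss; the paper's
    exact loss is not specified in this summary.
    """
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = model(adv, targets)
        grad = torch.autograd.grad(loss, adv)[0]
        # Untargeted: ascend the loss so detections degrade.
        adv = adv.detach() + alpha * grad.sign()
        # Project back into the L-infinity ball around the clean
        # image and into the valid pixel range.
        adv = image + torch.clamp(adv - image, -eps, eps)
        adv = adv.clamp(0.0, 1.0)
    return adv.detach()
```

Maximizing the detection loss within a small L-infinity ball degrades the detector's predictions without steering them toward any particular class, which is what makes the attack untargeted.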
Low Difficulty Summary (original content by GrooveSquid.com)
Object detection is important for safe autonomous driving and robotics. This paper tests DETR models and their variants against different types of attacks using two large datasets. It shows that DETR models can be fooled by these attacks, just like older CNN-based computer vision models. The results also show that the attacks transfer well between similar DETR models but poorly to other model types. Additionally, the researchers propose a new way to attack DETR models that is designed specifically for them, and their visualizations help show how the attacks change what the models pay attention to internally. Overall, the work shows that DETR-based object detectors need to be made more robust before they can be trusted in safety-critical settings.
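As an illustration of the kind of internal inspection both summaries describe, the sketch below pulls encoder self-attention maps from a DETR model via the Hugging Face `transformers` library. The checkpoint name and output fields follow that library's public interface, not the paper's own tooling, which is not specified here.

```python
import torch
from transformers import DetrForObjectDetection, DetrImageProcessor

# Hypothetical setup; the paper's own tooling is not specified in this summary.
processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")
model.eval()

def encoder_attention_maps(image):
    """Return per-layer encoder self-attention maps for one PIL image."""
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs, output_attentions=True)
    # Tuple with one tensor per encoder layer, each shaped
    # (batch, num_heads, sequence_length, sequence_length).
    return outputs.encoder_attentions
```

Comparing these maps for a clean image and its adversarial counterpart is one way to see how a perturbation shifts the encoder's focus, in the spirit of the visualizations described above.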
Keywords
» Artificial intelligence » CNN » Object detection » Self-attention » Transferability » Transformer