Boosting Adversarial Transferability across Model Genus by Deformation-Constrained Warping

by Qinliang Lin, Cheng Luo, Zenghao Niu, Xilin He, Weicheng Xie, Yuanbo Hou, Linlin Shen, Siyang Song

First submitted to arxiv on: 6 Feb 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (the paper's original abstract)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper proposes a novel attack strategy, the Deformation-Constrained Warping Attack (DeCoWA), to enhance the transferability of adversarial examples across model genera. The approach combines input transformation with model augmentation, allowing adversarial examples crafted on a surrogate model of one genus (e.g., a CNN) to effectively attack target models of a different genus (e.g., a Transformer). Specifically, DeCoWA applies an elastic deformation method called Deformation-Constrained Warping (DeCoW) to generate rich local details in the augmented input, while an adaptive control strategy constrains the strength and direction of the warping transformation. The paper demonstrates the effectiveness of DeCoWA on a variety of tasks, including image classification, video action recognition, and audio recognition.
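This summary is not accompanied by the paper's code; the sketch below is a minimal pure-NumPy illustration of the general idea of a constrained elastic warp used as an input transformation. The function names (`elastic_warp`, `bilinear_resize`) and parameters (`grid`, `max_shift`) are hypothetical, not from the paper: a coarse random displacement field is capped in magnitude (the "deformation constraint"), upsampled to full resolution, and used to resample the input.

```python
import numpy as np

def bilinear_resize(field, H, W):
    """Bilinearly upsample a coarse (gh, gw, 2) displacement grid to (H, W, 2)."""
    gh, gw = field.shape[:2]
    ys = np.linspace(0.0, gh - 1, H)
    xs = np.linspace(0.0, gw - 1, W)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, gh - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, gw - 1)
    wy = (ys - y0)[:, None, None]
    wx = (xs - x0)[None, :, None]
    f00 = field[y0][:, x0]; f01 = field[y0][:, x1]
    f10 = field[y1][:, x0]; f11 = field[y1][:, x1]
    return (f00 * (1 - wy) * (1 - wx) + f01 * (1 - wy) * wx
            + f10 * wy * (1 - wx) + f11 * wy * wx)

def elastic_warp(img, grid=4, max_shift=0.08, rng=None):
    """Warp `img` with a random elastic deformation whose per-pixel shift is
    capped at max_shift * min(H, W) pixels (a simple deformation constraint;
    the paper's adaptive control strategy is more sophisticated)."""
    rng = np.random.default_rng(rng)
    H, W = img.shape[:2]
    # Coarse random displacement field; the cap keeps the warp mild so the
    # input stays recognizable while local details still change.
    disp = rng.uniform(-1.0, 1.0, size=(grid, grid, 2)) * max_shift * min(H, W)
    d = bilinear_resize(disp, H, W)
    yy, xx = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # Nearest-neighbour sampling at the displaced coordinates, clipped to bounds.
    sy = np.clip(np.round(yy + d[..., 0]).astype(int), 0, H - 1)
    sx = np.clip(np.round(xx + d[..., 1]).astype(int), 0, W - 1)
    return img[sy, sx]
```

In a transfer-attack loop, one would typically average the surrogate model's gradients over several independently warped copies of the input at each attack step, so the perturbation does not overfit to the surrogate's exact spatial features.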
Low Difficulty Summary (written by GrooveSquid.com, original content)
This study creates a new way to make fake data that can trick AI models of different types. It combines two techniques: changing the input data and augmenting the model itself. This helps create fake data that works well against both CNN (Convolutional Neural Network) and Transformer models. The researchers tested the method on various tasks, like recognizing images, videos, and audio. The results show that the approach can significantly hurt the performance of these AI models.

Keywords

» Artificial intelligence  » Cnn  » Image classification  » Neural network  » Transferability  » Transformer