Summary of FACL-Attack: Frequency-Aware Contrastive Learning for Transferable Adversarial Attacks, by Hunmin Yang et al.
FACL-Attack: Frequency-Aware Contrastive Learning for Transferable Adversarial Attacks
by Hunmin Yang, Jongoh Jeong, Kuk-Jin Yoon
First submitted to arXiv on: 30 Jul 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | The paper proposes a novel approach for generating robust adversarial examples in strict real-world black-box settings, where both the target domain and the target model architecture are unknown. The authors introduce two modules, Frequency-Aware Domain Randomization (FADR) and Frequency-Augmented Contrastive Learning (FACL), to generate perturbations that transfer strongly across domains and models. The FADR module randomizes frequency components of the input, while the FACL module separates the domain-invariant features of clean and perturbed images. Extensive cross-domain and cross-model experiments demonstrate the effectiveness of the approach while maintaining efficient inference-time complexity (a rough code sketch of these two ideas follows the table). |
Low | GrooveSquid.com (original content) | A group of scientists has discovered a way to make fake pictures that can trick artificial intelligence (AI) models. These fake pictures are called “adversarial examples.” AI is very good at recognizing real images, but it is not perfect, so adversarial examples can fool it into thinking an image shows something it does not. The scientists created a new way to generate these fake pictures using a special kind of neural network. They tested their method on different types of images and AI models and found that it worked well in most cases. |
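To make the two modules more concrete, here is a minimal, hedged sketch of how frequency-band randomization and a clean-versus-perturbed contrastive objective could be wired together in PyTorch. This is not the authors' released implementation; the function names (`fadr_randomize`, `contrastive_push_loss`), the band limits, the gain scale, and the loss form are illustrative assumptions only.

```python
import torch
import torch.nn.functional as F

def fadr_randomize(images, band_limits=(0.1, 0.4), scale=0.2):
    """Frequency-band randomization in the spirit of FADR (illustrative only).

    Scales the FFT magnitude of each image by a random gain inside a
    mid-frequency band; band_limits and scale are assumed values, not
    settings from the paper. Expects images in [0, 1], shape (B, C, H, W).
    """
    b, _, h, w = images.shape
    spectrum = torch.fft.fftshift(torch.fft.fft2(images), dim=(-2, -1))

    # Radial frequency mask selecting the band to randomize.
    yy, xx = torch.meshgrid(
        torch.linspace(-0.5, 0.5, h, device=images.device),
        torch.linspace(-0.5, 0.5, w, device=images.device),
        indexing="ij",
    )
    radius = torch.sqrt(xx ** 2 + yy ** 2)
    band = ((radius >= band_limits[0]) & (radius < band_limits[1])).float()

    # Random per-image gain, applied only inside the selected band.
    gain = 1.0 + scale * (2.0 * torch.rand(b, 1, 1, 1, device=images.device) - 1.0)
    spectrum = spectrum * (1.0 + band * (gain - 1.0))

    randomized = torch.fft.ifft2(torch.fft.ifftshift(spectrum, dim=(-2, -1))).real
    return randomized.clamp(0.0, 1.0)

def contrastive_push_loss(feat_clean, feat_adv):
    """Toy stand-in for a contrastive objective over clean vs. perturbed
    features: minimizing the paired cosine similarity pushes the perturbed
    representation away from its clean counterpart."""
    z_clean = F.normalize(feat_clean.flatten(1), dim=1)
    z_adv = F.normalize(feat_adv.flatten(1), dim=1)
    return (z_clean * z_adv).sum(dim=1).mean()

# Illustrative training step for a perturbation generator (all names hypothetical):
#   x_rand = fadr_randomize(x_clean)                    # frequency-randomized input
#   x_adv  = (x_rand + generator(x_rand)).clamp(0, 1)   # bounded perturbed image
#   loss   = contrastive_push_loss(backbone(x_clean), backbone(x_adv))
#   loss.backward()
```

The FFT-based radial mask is just one simple way to isolate a frequency band; a DCT or wavelet decomposition would serve the same illustrative purpose.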
Keywords
- Artificial intelligence
- Inference
- Neural network
- Transferability