Adversarial purification for no-reference image-quality metrics: applicability study and new methods
by Aleksandr Gushchin, Anna Chistyakova, Vladislav Minashkin, Anastasia Antsiferova, Dmitriy Vatolin
First submitted to arXiv on: 10 Apr 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | This paper investigates whether adversarial purification defenses developed for image classifiers can also protect no-reference image-quality assessment (IQA) metrics. It evaluates preprocessing-based defenses, including geometric transformations, compression, denoising, and neural-network-based methods, against adversarial attacks on IQA models. The study proposes ways to measure the success of these defenses and benchmarks them on three IQA metrics: Linearity, MetaIQA, and SPAQ. |
| Low | GrooveSquid.com (original content) | The researchers address the lack of defensive techniques for IQA methods, which are crucial for measuring image quality. By attacking IQA models and then testing different defense strategies, the study provides a comprehensive picture of how effective each defense is at neutralizing attacks. |
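The defenses described above are preprocessing steps applied to an image before the IQA model scores it, so that small adversarial perturbations are destroyed while the image's perceptual quality is preserved. As a minimal illustrative sketch (not the paper's actual implementation), the snippet below shows two such purification steps on a NumPy image array: bit-depth reduction, a simple stand-in for compression-style quantization, and a horizontal flip as a geometric transformation. The function names `purify_bit_depth` and `purify_flip` are hypothetical.

```python
import numpy as np

def purify_bit_depth(img: np.ndarray, bits: int = 5) -> np.ndarray:
    """Quantize pixel values (in [0, 1]) to `bits` bits per channel.

    Quantization discards the low-amplitude detail where adversarial
    perturbations typically hide, acting like a crude compression defense.
    """
    levels = 2 ** bits - 1
    return np.round(img * levels) / levels

def purify_flip(img: np.ndarray) -> np.ndarray:
    """Geometric purification: mirror the image horizontally.

    Many attacks are brittle to even simple spatial transformations,
    while human-perceived quality is unchanged by a flip.
    """
    return img[:, ::-1]

if __name__ == "__main__":
    # Toy demonstration: a perturbation smaller than the quantization
    # step (1/31 here) is removed entirely by bit-depth reduction.
    clean = np.full((4, 4), 0.5)
    adversarial = clean + 0.01  # tiny additive perturbation
    print(np.allclose(purify_bit_depth(adversarial),
                      purify_bit_depth(clean)))  # the images now match
```

In a full pipeline, the purified image would then be passed to the IQA metric (e.g., Linearity, MetaIQA, or SPAQ), and the defense is judged by how closely the score of the purified attacked image matches the score of the original clean image.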
Keywords
» Artificial intelligence » Neural network