Artwork Protection Against Neural Style Transfer Using Locally Adaptive Adversarial Color Attack
by Zhongliang Guo, Junhao Dong, Yifei Qian, Kaixuan Wang, Weiye Li, Ziheng Guo, Yuheng Wang, Yanli Li, Ognjen Arandjelović, Lei Fang
First submitted to arXiv on: 18 Jan 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Cryptography and Security (cs.CR); Machine Learning (cs.LG); Image and Video Processing (eess.IV)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper but is written at a different level of difficulty. The medium-difficulty and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | Read the original abstract here
Medium | GrooveSquid.com (original content) | The proposed Locally Adaptive Adversarial Color Attack (LAACA) empowers artists to protect their artwork from unauthorized neural style transfer (NST) by introducing frequency-adaptive perturbations that degrade the quality of NST outputs while keeping the perturbed image visually similar to the original. The approach responds to concerns about artists’ rights and motivates the development of proactive protection methods. Guided by insights into human visual perception and the role of different frequency components, LAACA places perturbations where they are hardest to notice, making it more difficult for potential infringers to exploit protected artworks (see the sketches after this table). The authors also propose the Adversarial Color Distance Metric (ACDM) to comprehensively assess color-mattered tasks, such as NST-generated images. Experimental results demonstrate that LAACA effectively hinders unauthorized NST and that ACDM accurately measures color differences.
Low | GrooveSquid.com (original content) | Artists’ work can be copied without permission by using neural style transfer (NST). To protect artwork, the authors develop a new method called the Locally Adaptive Adversarial Color Attack (LAACA). It adds special changes to an image that make NST results look much worse while keeping the original image looking almost the same, helping artists keep control of their work. The authors also propose the Adversarial Color Distance Metric (ACDM), a new way to measure how much the colors of two images differ.
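To make the medium summary concrete, here is a minimal sketch of what a frequency-adaptive adversarial attack of this kind might look like in PyTorch. The Gram-matrix style loss, the high-frequency masking heuristic, and all hyperparameters are illustrative assumptions; the summary does not spell out LAACA's exact formulation, so this should be read as a generic NST-disruption attack in the same spirit, not the authors' implementation.

```python
# Hypothetical sketch of a LAACA-style attack: iteratively perturb an image so
# that the style statistics an NST model would extract are corrupted, while a
# high-frequency mask keeps the perturbation where it is least visible.
# All hyperparameters and the Gram-matrix loss are illustrative assumptions.
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF
from torchvision.models import vgg16, VGG16_Weights

device = "cuda" if torch.cuda.is_available() else "cpu"

# A frozen VGG-16 stands in for the style encoder of a typical NST pipeline.
vgg = vgg16(weights=VGG16_Weights.DEFAULT).features[:16].to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def features(x, layers=(3, 8, 15)):  # relu1_2, relu2_2, relu3_3
    feats, h = [], x
    for i, layer in enumerate(vgg):
        h = layer(h)
        if i in layers:
            feats.append(h)
    return feats

def gram(f):
    b, c, hw = f.shape[0], f.shape[1], f.shape[2] * f.shape[3]
    f = f.reshape(b, c, hw)
    return f @ f.transpose(1, 2) / (c * hw)

def high_freq_mask(img, kernel=9, sigma=3.0):
    # Weight the attack toward high-frequency regions (texture, edges),
    # an assumed reading of the paper's "frequency-adaptive" design.
    hf = (img - TF.gaussian_blur(img, kernel, sigma)).abs().mean(1, keepdim=True)
    return hf / (hf.amax(dim=(2, 3), keepdim=True) + 1e-8)

def protect(img, eps=8 / 255, alpha=2 / 255, steps=40):
    """img: (1, 3, H, W) tensor in [0, 1] on `device`. Returns a protected copy."""
    mask = high_freq_mask(img)
    with torch.no_grad():
        clean_grams = [gram(f) for f in features(img)]
    delta = torch.zeros_like(img, requires_grad=True)
    for _ in range(steps):
        adv_grams = [gram(f) for f in features((img + delta).clamp(0, 1))]
        # Ascend on the style loss: push Gram statistics away from the clean image's.
        loss = sum(F.mse_loss(a, c) for a, c in zip(adv_grams, clean_grams))
        loss.backward()
        with torch.no_grad():
            delta += alpha * mask * delta.grad.sign()
            delta.clamp_(-eps, eps)
            delta.grad.zero_()
    return (img + delta).clamp(0, 1).detach()
```

In use, an artist would run `protect` on an image tensor before publishing it: an NST model that later ingests the protected image as a style source should produce visibly degraded stylizations, while the protected image itself stays close to the original.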
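Likewise, ACDM's exact formula is not given in the summary. As a stand-in, the sketch below computes a standard perceptual color difference (mean CIEDE2000 distance in CIELAB space, via scikit-image), which illustrates the kind of quantity a color-focused metric like ACDM reports; it is not the paper's actual metric.

```python
# Illustrative color-difference measurement, standing in for ACDM. This is a
# common perceptual baseline (mean CIEDE2000 distance in CIELAB space), not
# the paper's actual metric, whose formula the summary does not provide.
import numpy as np
from skimage.color import rgb2lab, deltaE_ciede2000

def mean_color_distance(img_a: np.ndarray, img_b: np.ndarray) -> float:
    """img_a, img_b: float RGB arrays in [0, 1] with shape (H, W, 3)."""
    return float(deltaE_ciede2000(rgb2lab(img_a), rgb2lab(img_b)).mean())

# Toy check: a small uniform color shift yields a small but nonzero distance.
rng = np.random.default_rng(0)
a = rng.random((64, 64, 3))
b = np.clip(a + 0.05, 0.0, 1.0)
print(mean_color_distance(a, b))  # larger values mean a bigger color change
```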
Keywords
* Artificial intelligence
* Style transfer