Summary of Deferred Poisoning: Making the Model More Vulnerable Via Hessian Singularization, by Yuhao He et al.
Deferred Poisoning: Making the Model More Vulnerable via Hessian Singularization
by Yuhao He, Jinyu Tian, Xianwei Zheng, Li Dong, Yuanman Li, Jiantao Zhou
First submitted to arXiv on: 6 Nov 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Cryptography and Security (cs.CR); Computer Vision and Pattern Recognition (cs.CV)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | Read the original abstract here
Medium | GrooveSquid.com (original content) | This paper introduces a new type of poisoning attack, the Deferred Poisoning Attack, which evades traditional defenses by keeping the model's performance normal during the training and validation phases. The attack stays stealthy because the poisoned model reaches a loss value similar to that of a normally trained model, yet its loss landscape has much larger local curvature, leaving it highly vulnerable to evasion attacks or even natural noise. The proposed Singularization Regularization term drives the Hessian at the victim model's optimal point toward singularity, so that small perturbations cause significant performance degradation. Experiments on image classification tasks validate the attack's effectiveness under various scenarios. (An illustrative sketch of such a Hessian-based penalty follows this table.)
Low | GrooveSquid.com (original content) | This paper introduces a new kind of problem for machine learning models, called Deferred Poisoning Attacks. These attacks are hard to detect because they do not change how well the model performs during training and testing. Instead, they make the model very sensitive to small changes or noise, which can cause big problems later on. The authors achieve this with a technique called Singularization Regularization. They test the idea on image recognition tasks and show that it is effective at harming models.
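The medium-difficulty summary above describes a Hessian-based "Singularization Regularization" term. The snippet below is a minimal, hypothetical PyTorch sketch of one way such a penalty could be approximated with Hessian-vector products; it is not the authors' implementation, and the probe-direction heuristic, the `singularization_penalty` name, the penalty weight, and the toy model and data are all assumptions made purely for illustration.

```python
import torch
import torch.nn as nn

def hessian_vector_product(loss, params, vec):
    # First backward pass: gradients with a graph so we can differentiate again.
    grads = torch.autograd.grad(loss, params, create_graph=True)
    dot = sum((g * v).sum() for g, v in zip(grads, vec))
    # Second backward pass: H @ vec, kept in the graph so the penalty stays differentiable.
    return torch.autograd.grad(dot, params, create_graph=True)

def singularization_penalty(loss, params, num_probes=4):
    # Smallest ||H v|| over a few random unit directions v; pushing this value
    # toward zero encourages a (near-)singular Hessian at the current parameters.
    best = None
    for _ in range(num_probes):
        vec = [torch.randn_like(p) for p in params]
        norm = torch.sqrt(sum((v ** 2).sum() for v in vec))
        vec = [v / norm for v in vec]
        hvp = hessian_vector_product(loss, params, vec)
        hvp_norm = torch.sqrt(sum((h ** 2).sum() for h in hvp))
        best = hvp_norm if best is None else torch.minimum(best, hvp_norm)
    return best

# Toy usage on a hypothetical linear classifier with random data.
model = nn.Linear(10, 2)
x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))
loss = nn.CrossEntropyLoss()(model(x), y)
params = [p for p in model.parameters() if p.requires_grad]
total = loss + 0.1 * singularization_penalty(loss, params)  # 0.1 is an arbitrary weight
total.backward()
```

Note that the paper describes a data-poisoning attack, so in the actual setting the singularity property would be induced through crafted training data rather than by adding a term to the victim's own loss as in this toy example; the sketch only illustrates the Hessian-singularity idea.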
Keywords
» Artificial intelligence » Image classification » Loss function » Machine learning » Regularization