Summary of LaFA: Latent Feature Attacks on Non-negative Matrix Factorization, by Minh Vu et al.
LaFA: Latent Feature Attacks on Non-negative Matrix Factorization
by Minh Vu, Ben Nebgen, Erik Skau, Geigh Zollicoffer, Juan Castorena, Kim Rasmussen, Boian Alexandrov, Manish Bhattarai
First submitted to arXiv on: 7 Aug 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Cryptography and Security (cs.CR)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | Read the original abstract here
Medium | GrooveSquid.com (original content) | Medium Difficulty summary: This paper investigates the robustness of Non-negative Matrix Factorization (NMF), an unsupervised machine learning method, against adversarial attacks. While NMF has been considered resilient to such attacks, recent advances in computational tools like PyTorch have raised concerns about its reliability. The authors introduce a novel class of attacks called Latent Feature Attacks (LaFA), which target the latent features produced by NMF. LaFA uses a Feature Error (FE) loss to generate perturbations of the original data, revealing vulnerabilities similar to those found in other machine learning techniques. To scale FE attacks to larger datasets, the authors develop a method based on implicit differentiation. Extensive experiments on synthetic and real-world data demonstrate NMF's vulnerabilities and the effectiveness of LaFA.
Low | GrooveSquid.com (original content) | Low Difficulty summary: This paper looks at how well an important machine learning tool called Non-negative Matrix Factorization (NMF) can withstand attempts to trick it into making mistakes. Normally, NMF is good at handling these kinds of attacks, but some new computer tools have made people wonder whether that is still true. The researchers came up with a new way to trick NMF, which they call Latent Feature Attacks (LaFA). LaFA changes the input data in small ways that make it harder for NMF to recover the data's underlying structure. To make this work on bigger datasets, the authors found a way to speed things up using special math tricks. They tested their ideas on both synthetic and real data and found that NMF is more vulnerable than one might expect.
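To make the attack idea in the medium summary concrete, here is a minimal, hypothetical sketch in PyTorch (the toolkit the paper itself mentions). It is not the authors' implementation: all shapes, step sizes, and the perturbation budget are illustrative, and it unrolls NMF's multiplicative updates so autograd can reach the input, whereas the paper scales to larger data via implicit differentiation. The core idea is the same: perturb the data to maximize a Feature Error (FE) loss between clean and perturbed latent features.

```python
import torch

torch.manual_seed(0)

def nmf_features(X, W, n_iter=50, eps=1e-9):
    # Unrolled multiplicative updates for H in X ~ W @ H, with the basis W
    # held fixed; staying in torch lets gradients flow back to the input X.
    H = torch.full((W.shape[1], X.shape[1]), 0.5)
    for _ in range(n_iter):
        H = H * (W.T @ X) / (W.T @ (W @ H) + eps)
    return H

# Toy non-negative data and a fixed basis (shapes are illustrative).
X = torch.rand(20, 30)
W = torch.rand(20, 4)
H_clean = nmf_features(X, W).detach()

# FE-style attack: perturb X to maximize the squared feature error
# ||H(X + delta) - H(X)||^2 under a small L_inf budget.
delta = (1e-3 * torch.rand_like(X)).requires_grad_()
for _ in range(10):
    H_adv = nmf_features(torch.clamp(X + delta, min=0), W)
    fe_loss = ((H_adv - H_clean) ** 2).sum()
    fe_loss.backward()
    with torch.no_grad():
        delta += 0.01 * delta.grad.sign()  # FGSM-like ascent step
        delta.clamp_(-0.05, 0.05)          # keep the perturbation small
        delta.grad.zero_()

H_final = nmf_features(torch.clamp(X + delta, min=0), W)
feature_error = float(((H_final - H_clean) ** 2).sum())
print(feature_error)
```

The key design point, as the summary describes, is that the loss is defined on the latent features H rather than on any downstream prediction: a tiny, non-negativity-preserving change to the data can noticeably shift the features NMF extracts.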
Keywords
» Artificial intelligence » Machine learning » Unsupervised