
Summary of Showing Many Labels in Multi-label Classification Models: An Empirical Study of Adversarial Examples, by Yujiang Liu et al.


Showing Many Labels in Multi-label Classification Models: An Empirical Study of Adversarial Examples

by Yujiang Liu, Wenjian Luo, Zhijian Chen, Muhammad Luqman Naseem

First submitted to arxiv on: 26 Sep 2024

Categories

  • Main: Artificial Intelligence (cs.AI)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here
Medium Difficulty Summary (original content by GrooveSquid.com)
In this paper, researchers investigate the susceptibility of Deep Neural Networks (DNNs) to adversarial examples in the multi-label domain. Specifically, they introduce a novel attack type called “Showing Many Labels”, whose goal is to maximize the number of labels included in the classifier’s prediction results. The authors adapt nine attack algorithms from the multi-class setting to this new attack scenario and evaluate their performance. They test these attacks on four popular multi-label datasets against two target models, ML-LIW and ML-GCN. The results show that iterative attacks perform better than one-step attacks and, surprisingly, that it is possible to make the classifier show every label in the dataset.
Low Difficulty Summary (original content by GrooveSquid.com)
Multi-label adversarial examples can fool Deep Neural Networks (DNNs), making them less accurate. A new type of attack called “Showing Many Labels” tries to make the DNN predict many labels at once. Researchers tested nine different attack algorithms on four big datasets. They used two kinds of models: ML-LIW and ML-GCN. The results show that some attacks work better than others, and amazingly, it’s possible to trick the model into predicting every single label at the same time.
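
To make the “Showing Many Labels” objective concrete, here is a minimal, hypothetical PGD-style sketch of an iterative attack on a multi-label classifier with sigmoid outputs. The loss, step sizes, and function names are illustrative assumptions made for this summary, not the paper’s exact attack algorithms.

```python
# Hypothetical PGD-style sketch of a "Showing Many Labels" objective.
# The model, loss, and hyperparameters are illustrative assumptions,
# not the paper's exact attack formulations.
import torch
import torch.nn.functional as F

def show_many_labels_attack(model, x, eps=8 / 255, alpha=2 / 255, steps=10):
    """Perturb x within an L-infinity ball of radius eps so that as many
    sigmoid outputs as possible rise above the 0.5 decision threshold."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = model(x_adv)  # one logit per label
        # BCE against an all-ones target is minimized when every label is "shown".
        loss = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Targeted step: descend the loss so predictions move toward all-ones,
        # then project back into the eps-ball and the valid pixel range.
        x_adv = x_adv.detach() - alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv.detach()
```

Because the target is the all-ones label vector, descending this loss pushes every logit upward, so more labels cross the 0.5 threshold with each iteration; a one-step variant would take only a single such step, which matches the summary’s observation that iterative attacks tend to show more labels.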

Keywords

» Artificial intelligence  » GCN