Summary of Are Your Models Still Fair? Fairness Attacks on Graph Neural Networks via Node Injections, by Zihan Luo et al.
Are Your Models Still Fair? Fairness Attacks on Graph Neural Networks via Node Injections
by Zihan Luo, Hong Huang, Yongkang Zhou, Jiping Zhang, Nuo Chen, Hai Jin
First submitted to arXiv on: 5 Jun 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | This paper investigates the fairness vulnerabilities of Graph Neural Networks (GNNs) under malicious adversarial attacks. Prior work has revealed fairness issues in GNNs, but all existing fairness attacks require manipulating the connectivity between existing nodes, which may be unrealistic in practice. To address this limitation, the authors introduce a Node Injection-based Fairness Attack (NIFA), which injects new nodes into the graph and optimizes their feature matrix to undermine GNN fairness. The paper demonstrates that NIFA significantly compromises the fairness of mainstream GNNs, including fairness-aware GNNs, by injecting merely 1% of nodes on three real-world datasets. This work highlights the importance of considering GNN fairness vulnerabilities and encourages the development of corresponding defense mechanisms. |
| Low | GrooveSquid.com (original content) | A group of researchers found that a type of artificial intelligence called a Graph Neural Network (GNN) can be made unfair when it is used to analyze graphs. Earlier attacks that made GNNs unfair were limited because they relied on changing the connections between nodes that already exist in the graph, which is hard to do in the real world. The authors came up with a new way to make GNNs unfair: injecting fake nodes into the graph, which is more realistic than changing existing connections. They tested this method and found that it is very effective, even when the injected fake nodes make up only 1% of the graph. This work shows how important it is to protect the fairness of GNNs. |
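To make the attack idea concrete, here is a minimal, illustrative sketch of a node-injection fairness attack in the spirit of NIFA. Everything in it is an assumption for illustration: a toy random graph, a simple surrogate GCN, a random scheme for wiring the injected nodes, and a differentiable statistical-parity gap as the unfairness objective. It is not the authors' implementation.

```python
# Illustrative sketch only: a toy node-injection fairness attack in the spirit
# of NIFA. The graph, the surrogate GCN, the wiring of injected nodes, and the
# statistical-parity objective are all assumptions, not the paper's code.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# --- Toy clean graph: n nodes with features, labels, and a sensitive attribute ---
n, d, n_inject = 200, 16, 2                    # inject ~1% of the nodes
X = torch.randn(n, d)                          # node feature matrix
A = (torch.rand(n, n) < 0.05).float()
A = ((A + A.T) > 0).float()                    # symmetrize
A.fill_diagonal_(1.0)                          # add self-loops
y = torch.randint(0, 2, (n,))                  # task labels
s = torch.randint(0, 2, (n,))                  # sensitive group (0 or 1)

def gcn_forward(X, A, W1, W2):
    """Two-layer GCN with symmetric normalization (the surrogate model)."""
    d_inv_sqrt = torch.diag(A.sum(1).pow(-0.5))
    A_hat = d_inv_sqrt @ A @ d_inv_sqrt
    return A_hat @ torch.relu(A_hat @ X @ W1) @ W2   # logits

# --- Train the surrogate on the clean graph, then freeze its weights ---
W1 = (0.1 * torch.randn(d, 32)).requires_grad_()
W2 = (0.1 * torch.randn(32, 2)).requires_grad_()
opt = torch.optim.Adam([W1, W2], lr=0.01)
for _ in range(200):
    loss = F.cross_entropy(gcn_forward(X, A, W1, W2), y)
    opt.zero_grad(); loss.backward(); opt.step()
W1, W2 = W1.detach(), W2.detach()

# --- Inject nodes: fix their (random) edges, then optimize only their features ---
N = n + n_inject
A_big = torch.zeros(N, N)
A_big[:n, :n] = A
for i in range(n_inject):
    victims = torch.randperm(n)[:5]            # each fake node links to 5 targets
    A_big[n + i, victims] = 1.0
    A_big[victims, n + i] = 1.0
    A_big[n + i, n + i] = 1.0                  # self-loop for the fake node

X_inj = torch.randn(n_inject, d, requires_grad=True)
opt = torch.optim.Adam([X_inj], lr=0.1)
for _ in range(100):
    X_big = torch.cat([X, X_inj], dim=0)
    probs = torch.softmax(gcn_forward(X_big, A_big, W1, W2)[:n], dim=1)[:, 1]
    # Differentiable statistical-parity gap between the two sensitive groups;
    # increasing this gap is the attack's (un)fairness objective.
    gap = (probs[s == 0].mean() - probs[s == 1].mean()).abs()
    (-gap).backward()                          # gradient ascent on the gap
    opt.step(); opt.zero_grad()
```

The actual method is more principled about which nodes the injected ones connect to and how their features are constrained; see the paper for those details. To evaluate such an attack, one would retrain a victim GNN on the poisoned graph and compare its group-fairness metrics against the clean graph.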
Keywords
» Artificial intelligence » GNN