Summary of Graph Fairness Learning Under Distribution Shifts, by Yibo Li et al.
Graph Fairness Learning under Distribution Shifts
by Yibo Li, Xiao Wang, Yujie Xing, Shaohua Fan, Ruijia Wang, Yaoqi Liu, Chuan Shi
First submitted to arXiv on 30 Jan 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Social and Information Networks (cs.SI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper at a different level of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | The paper’s original abstract (see the arXiv listing for the full text). |
Medium | GrooveSquid.com (original content) | Graph neural networks (GNNs) have achieved impressive results on graph-structured data, but they can inherit biases from their training data and produce discriminatory predictions. Prior work on fair GNNs assumes that training and testing data come from the same distribution. This paper asks how distribution shifts affect graph fairness learning and whether fairness degrades under such shifts. To answer these questions, the authors identify the factors that determine bias on a graph and the factors that influence fairness on testing graphs. They then propose FatraGNN, a framework that generates graphs with significant bias and minimizes the representation distance between the training graph and these generated graphs, so that the model stays accurate and fair even on biased, unseen testing graphs (a minimal sketch of this alignment idea follows the table). |
Low | GrooveSquid.com (original content) | This paper is about making sure graph neural networks (GNNs) don’t make discriminatory predictions based on attributes like gender or race. GNNs can become biased when they are trained on data that is not representative of the real world. The researchers ask how such bias affects the model when it is tested on new data that differs from its training data, and they propose a way to keep the model fair and accurate even when it encounters new, biased data. |
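The medium-difficulty summary describes FatraGNN’s core idea only at a high level: generate graphs with significant bias and train the encoder so that representations of the training graph and the generated graphs stay close, while also penalizing unfair predictions. The sketch below illustrates that idea under stated assumptions and is not the authors’ implementation: the `TinyGCN` encoder, the `generate_biased_graph` helper, the soft demographic-parity regularizer, and the loss weights are all hypothetical stand-ins chosen for a self-contained toy example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyGCN(nn.Module):
    """Minimal one-layer GCN encoder on a dense, symmetrically normalized adjacency."""

    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)

    def forward(self, x, adj):
        # D^{-1/2} (A + I) D^{-1/2} normalization with self-loops
        a_hat = adj + torch.eye(adj.size(0))
        d_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
        a_norm = d_inv_sqrt.unsqueeze(1) * a_hat * d_inv_sqrt.unsqueeze(0)
        return F.relu(a_norm @ self.lin(x))


def demographic_parity_gap(logits, sens):
    """Soft demographic parity: gap in mean predicted positive probability between groups."""
    probs = torch.sigmoid(logits)
    return (probs[sens == 1].mean() - probs[sens == 0].mean()).abs()


def alignment_loss(z_train, z_gen):
    """Distance between mean representations of the training and generated biased graphs."""
    return F.mse_loss(z_train.mean(dim=0), z_gen.mean(dim=0))


def generate_biased_graph(x, adj, sens, strength=2.0):
    """Hypothetical stand-in for a biased-graph generator: it simply amplifies the
    correlation between node features and the sensitive attribute (not the paper's generator)."""
    return x + strength * sens.float().unsqueeze(1), adj


# Toy data: random stand-ins for a real attributed graph with a binary sensitive attribute
torch.manual_seed(0)
n, d = 60, 16
x = torch.randn(n, d)
adj = (torch.rand(n, n) < 0.1).float()
adj = ((adj + adj.t()) > 0).float()            # symmetrize
y = torch.randint(0, 2, (n,)).float()
sens = torch.randint(0, 2, (n,))

encoder = TinyGCN(d, 32)
classifier = nn.Linear(32, 1)
opt = torch.optim.Adam(list(encoder.parameters()) + list(classifier.parameters()), lr=1e-2)

for epoch in range(100):
    opt.zero_grad()
    z_train = encoder(x, adj)
    logits = classifier(z_train).squeeze(-1)

    x_gen, adj_gen = generate_biased_graph(x, adj, sens)
    z_gen = encoder(x_gen, adj_gen)

    loss = (
        F.binary_cross_entropy_with_logits(logits, y)   # task loss on the training graph
        + 1.0 * alignment_loss(z_train, z_gen)          # keep biased-graph representations close
        + 1.0 * demographic_parity_gap(logits, sens)    # fairness regularizer
    )
    loss.backward()
    opt.step()

with torch.no_grad():
    gap = demographic_parity_gap(classifier(encoder(x, adj)).squeeze(-1), sens)
print(f"final demographic parity gap: {gap.item():.3f}")
```

Matching only mean embeddings is the crudest form of representation alignment; richer distribution distances (e.g., adversarial critics or MMD) could be swapped in, and the paper’s actual generation and alignment modules should be taken from the original work rather than this sketch.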