MAPPING: Debiasing Graph Neural Networks for Fair Node Classification with Limited Sensitive Information Leakage
by Ying Song, Balaji Palanisamy
First submitted to arXiv on: 23 Jan 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Machine Learning (stat.ML)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | Graph Neural Networks (GNNs) have achieved remarkable success in various web-based applications, but they inherit and can exacerbate historical discrimination and social stereotypes, which hinders their deployment in high-stakes domains like online clinical diagnosis and financial crediting. Fairness techniques developed for i.i.d. data do not transfer directly to non-i.i.d. graph structures, where samples are topologically dependent. Most existing work enforces fairness with in-processing techniques, while pre-processing debiasing frameworks remain largely under-explored. This paper proposes MAPPING (Masking And Pruning and Message-Passing Training), a novel model-agnostic debiasing framework for fair node classification. It uses distance covariance-based fairness constraints to reduce feature and topology biases in arbitrary dimensions, and combines them with adversarial debiasing to restrict attribute inference attacks. Experiments on real-world datasets demonstrate the effectiveness and flexibility of MAPPING, which achieves better trade-offs between utility and fairness while mitigating privacy risks. |
Low | GrooveSquid.com (original content) | This paper is about making sure that computer programs don't make unfair decisions based on things like a person's race or gender. Right now, these programs can be unfair because they were trained on data that has biases built into it. The researchers propose a new way to train these programs, called MAPPING, which helps them be more fair and also protects people's private information. |
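The distance covariance statistic at the heart of MAPPING's fairness constraints can be sketched in a few lines. The snippet below is a minimal NumPy illustration of the statistic itself, not the authors' implementation; the function name and interface are our own. The idea is that distance covariance between learned representations and sensitive attributes is zero when the two are independent, so it can serve as a decorrelation penalty:

```python
import numpy as np

def distance_covariance(x, s):
    """Sample distance covariance between n paired observations.
    Population dCov is 0 iff the two variables are independent, which is
    why it can act as a fairness/decorrelation penalty in training."""
    x = np.asarray(x, dtype=float).reshape(len(x), -1)
    s = np.asarray(s, dtype=float).reshape(len(s), -1)

    def doubly_centered_dist(m):
        # Pairwise Euclidean distance matrix, then double-centering
        # (subtract row means and column means, add back the grand mean).
        d = np.linalg.norm(m[:, None, :] - m[None, :, :], axis=-1)
        return (d - d.mean(axis=0, keepdims=True)
                  - d.mean(axis=1, keepdims=True) + d.mean())

    a = doubly_centered_dist(x)
    b = doubly_centered_dist(s)
    # Squared sample dCov is the mean of the elementwise product.
    return float(np.sqrt(max((a * b).mean(), 0.0)))
```

For example, representations that are constant across a sensitive attribute yield a penalty of zero, while representations that track the attribute yield a positive one. Unlike plain covariance, this works for vector-valued (arbitrary-dimensional) features, which matches the paper's claim of reducing biases "in arbitrary dimensions".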
Keywords
* Artificial intelligence * Classification * Inference * Pruning