Summary of Heterophilous Distribution Propagation For Graph Neural Networks, by Zhuonan Zheng et al.
Heterophilous Distribution Propagation for Graph Neural Networks
by Zhuonan Zheng, Sheng Zhou, Hongjia Xu, Ming Gu, Yilun Xu, Ao Li, Yuhong Li, Jingjun Gu, Jiajun Bu
First submitted to arXiv on: 31 May 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Social and Information Networks (cs.SI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
High | Paper authors | High Difficulty Summary Read the original abstract here |
Medium | GrooveSquid.com (original content) | Medium Difficulty Summary The paper proposes a new approach to Graph Neural Networks (GNNs) called Heterophilous Distribution Propagation (HDP), which addresses the limitations of existing heterophilous GNNs (HeterGNNs). During training, HDP adaptively separates each node's neighbors into homophilous and heterophilous parts based on pseudo assignments. It combines an orthogonality-oriented constraint, enforced via trusted prototype contrastive learning, with semantic-aware message passing. The authors conduct extensive experiments on 9 benchmark datasets and outperform representative baselines on heterophilous datasets. The proposed method is designed to handle graphs that violate the homophily assumption, which is critical in many real-world scenarios. |
Low | GrooveSquid.com (original content) | Low Difficulty Summary This paper proposes a new way for computer models (called Graph Neural Networks) to understand relationships between things. Normally, these models assume that similar things are connected together, but this isn’t always true in real life. The new approach, called HDP, tries to separate the connections into two types: those where similar things are connected and those where different things are connected. This helps the model learn more effectively from graphs that don’t follow the usual rules. |
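To make the neighbor-separation idea concrete, here is a minimal, hypothetical sketch (not the authors' implementation) of one aggregation step: each node's neighbors are split by whether their pseudo-label agrees with the node's own, and the two groups contribute separate terms. The function name `hdp_like_aggregate`, the mixing weight `alpha`, and the sign convention are illustrative assumptions only.

```python
import numpy as np

def hdp_like_aggregate(x, adj, pseudo_labels, alpha=0.5):
    """Toy sketch of heterophily-aware aggregation.

    Splits each node's neighbors into a homophilous group (same
    pseudo-label) and a heterophilous group (different pseudo-label),
    then combines their mean features with opposite signs. This is an
    illustration of the separation idea, not the paper's HDP method.
    """
    n, d = x.shape
    out = np.zeros_like(x)
    for i in range(n):
        nbrs = np.nonzero(adj[i])[0]
        if len(nbrs) == 0:
            out[i] = x[i]  # isolated node: keep its own features
            continue
        homo = nbrs[pseudo_labels[nbrs] == pseudo_labels[i]]
        hete = nbrs[pseudo_labels[nbrs] != pseudo_labels[i]]
        h = x[homo].mean(axis=0) if len(homo) else np.zeros(d)
        t = x[hete].mean(axis=0) if len(hete) else np.zeros(d)
        # homophilous neighbors pull the representation together;
        # heterophilous neighbors enter as a separate (contrast) term
        out[i] = x[i] + alpha * h - alpha * t
    return out
```

For example, on a three-node graph where node 0 links to one same-label and one different-label neighbor, the two groups visibly push node 0's features in different directions; the real HDP additionally learns the separation from pseudo assignments rather than fixing it.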