Summary of FedRGL: Robust Federated Graph Learning for Label Noise, by De Li et al.
FedRGL: Robust Federated Graph Learning for Label Noise
by De Li, Haodong Qian, Qiyu Li, Zhou Tan, Zemin Gan, Jinyan Wang, Xianxian Li
First submitted to arXiv on: 28 Nov 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | High Difficulty Summary Read the original abstract here |
Medium | GrooveSquid.com (original content) | Medium Difficulty Summary Federated Graph Learning (FGL) is a distributed machine learning paradigm that enables secure, collaborative modeling of local graph data across clients using graph neural networks. Label noise in client data, however, degrades the global model’s generalization performance. To address this, the authors propose FedRGL, which introduces dual-perspective consistency filtering of noisy nodes, leveraging both the global model and the subgraph structure under class-aware dynamic thresholds. Graph contrastive learning is incorporated into client-side training to improve encoder robustness and to assign high-confidence pseudo-labels to the noisy nodes. Model quality is then measured via the predictive entropy of unlabeled nodes, enabling adaptive robust aggregation of the global model. Comparative experiments on multiple real-world graph datasets show that FedRGL outperforms 12 baseline methods across various noise rates, noise types, and numbers of clients.
Low | GrooveSquid.com (original content) | Low Difficulty Summary Federated Graph Learning (FGL) is a way for computers to learn together from each other’s data without sharing the data itself, which helps keep sensitive information private. But sometimes label errors creep into the training data, making the results less accurate. To fix this, researchers developed a new method called FedRGL that detects and corrects these errors, along with extra training steps that help each computer’s model learn more reliably.
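The abstract states that client model quality is measured by the predictive entropy of unlabeled nodes and used for adaptive aggregation, but gives no formulas. Below is a minimal NumPy sketch of how such entropy-based client weighting *might* look; the function names and the exponential weighting scheme are illustrative assumptions, not the paper’s actual method.

```python
import numpy as np

def predictive_entropy(probs):
    """Mean Shannon entropy of per-node class distributions.

    probs: array of shape (num_nodes, num_classes), rows summing to 1.
    Lower entropy = more confident predictions on unlabeled nodes.
    """
    eps = 1e-12  # avoid log(0)
    return float(-(probs * np.log(probs + eps)).sum(axis=1).mean())

def entropy_weighted_average(client_params, client_probs):
    """Aggregate client parameters, down-weighting high-entropy clients.

    client_params: list of parameter arrays (same shape per client).
    client_probs:  list of per-client prediction arrays on unlabeled nodes.
    The exp(-entropy) weighting is a hypothetical choice for illustration.
    """
    entropies = np.array([predictive_entropy(p) for p in client_probs])
    weights = np.exp(-entropies)
    weights /= weights.sum()
    return sum(w * p for w, p in zip(weights, client_params))
```

In this sketch, a client whose model is confident on its unlabeled nodes (low entropy) contributes more to the aggregated global parameters than a client whose predictions are near-uniform.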
Keywords
» Artificial intelligence » Encoder » Generalization » Machine learning