NoisyGL: A Comprehensive Benchmark for Graph Neural Networks under Label Noise

by Zhonghao Wang, Danyu Sun, Sheng Zhou, Haobo Wang, Jiapei Fan, Longtao Huang, Jiajun Bu

First submitted to arXiv on: 6 Jun 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Social and Information Networks (cs.SI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper’s original abstract, available on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
Graph Neural Networks (GNNs) have shown great potential in node classification tasks by leveraging a message-passing mechanism. However, their performance often depends on high-quality node labels, which can be difficult to obtain due to unreliable sources or adversarial attacks. The study of GNNs under Label Noise (GLN) has gained traction, but the lack of a comprehensive benchmark has hindered deeper understanding and further development. To address this gap, the authors introduce NoisyGL, a first-of-its-kind benchmark for graph neural networks under label noise. NoisyGL enables fair comparisons and detailed analyses of GLN methods across various datasets under unified settings. The authors expect their findings to benefit future studies, and their open-source library aims to foster further advancements. A short code sketch after these summaries illustrates what injecting label noise looks like.
Low Difficulty Summary (written by GrooveSquid.com, original content)
Graph Neural Networks are super powerful in classifying nodes! But they only work well if the node labels are really good. In real life, getting those labels can be tricky because of bad data or sneaky attacks. To help make GNNs better, people started looking at how they do when the labels are noisy (wrong). But there wasn’t a standard way to test this yet. So, we made a special tool called NoisyGL that lets people compare and analyze different methods for handling noisy labels on graphs. This will really help us improve GNNs in the future!
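To make the idea of label noise concrete, here is a minimal Python sketch of how uniformly random label corruption could be simulated for a node classification dataset. It is purely illustrative: the function name, parameters, and dataset size below are hypothetical and are not part of the NoisyGL API described in the paper.

import numpy as np

def inject_uniform_label_noise(labels, noise_rate, num_classes, seed=0):
    # Hypothetical helper, not part of NoisyGL: flip a fraction of node
    # labels to a different class chosen uniformly at random.
    rng = np.random.default_rng(seed)
    noisy = labels.copy()
    num_noisy = int(noise_rate * len(labels))
    # Pick which nodes receive a corrupted label.
    idx = rng.choice(len(labels), size=num_noisy, replace=False)
    for i in idx:
        # Draw a class uniformly from the num_classes - 1 wrong classes.
        wrong = rng.integers(num_classes - 1)
        noisy[i] = wrong if wrong < noisy[i] else wrong + 1
    return noisy

# Toy example: corrupt 30% of labels in a 7-class node classification task.
clean_labels = np.random.default_rng(1).integers(0, 7, size=2708)
noisy_labels = inject_uniform_label_noise(clean_labels, noise_rate=0.3, num_classes=7)
print((clean_labels != noisy_labels).mean())  # roughly 0.3

A benchmark such as the one the paper describes would then train and evaluate GNN models on the corrupted labels while measuring accuracy against the clean ones; the sketch above only shows the corruption step.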

Keywords

» Artificial intelligence  » Classification