Noisy Node Classification by Bi-level Optimization based Multi-teacher Distillation

by Yujing Liu, Zongqian Wu, Zhengyu Lu, Ci Nie, Guoqiu Wen, Ping Hu, Xiaofeng Zhu

First submitted to arXiv on: 27 Apr 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper, each written at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper’s original abstract, available on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper proposes BO-NNC, a novel multi-teacher distillation approach for noisy node classification in graph data, addressing the common assumption in previous graph neural networks that node labels are clean. The method trains diverse teacher models with multiple self-supervised learning methods and aggregates their predictions through a teacher weight matrix. A bi-level optimization strategy dynamically adjusts the teacher weight matrix according to the student model’s training progress, and a label improvement module further refines label quality. Evaluated on real datasets, the approach achieves state-of-the-art results.
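To make the aggregation idea above concrete, here is a minimal PyTorch sketch of weighted multi-teacher distillation with a learnable teacher weight vector and an alternating, bi-level-style update. Everything in it (the shapes, the `teacher_w` vector, reusing the distillation loss for the outer step, optimizing raw logits instead of a GNN) is an illustrative assumption, not the authors’ implementation: BO-NNC uses a full teacher weight matrix, a dedicated bi-level objective, and a label improvement module not shown here.

```python
import torch
import torch.nn.functional as F

# Illustrative stand-ins only; BO-NNC's real setup trains GNN teachers/students.
K, N, C = 3, 100, 7                                     # teachers, nodes, classes
teacher_logits = torch.randn(K, N, C)                   # frozen teacher outputs
student_logits = torch.randn(N, C, requires_grad=True)  # stand-in for student output
teacher_w = torch.zeros(K, requires_grad=True)          # learnable per-teacher weights

def distill_loss(student_out, teacher_out, weights, T=2.0):
    """KL divergence between the student and the weighted teacher ensemble."""
    w = torch.softmax(weights, dim=0)                   # normalize teacher weights
    targets = torch.einsum("k,knc->nc", w, torch.softmax(teacher_out / T, dim=-1))
    return F.kl_div(F.log_softmax(student_out / T, dim=-1),
                    targets, reduction="batchmean") * (T * T)

student_opt = torch.optim.Adam([student_logits], lr=1e-2)
weight_opt = torch.optim.Adam([teacher_w], lr=1e-2)

for step in range(200):
    # Inner step: train the student against the current weighted ensemble.
    student_opt.zero_grad()
    distill_loss(student_logits, teacher_logits, teacher_w.detach()).backward()
    student_opt.step()

    # Outer step: adjust teacher weights given the student's progress
    # (the paper uses a separate bi-level objective; we reuse the same loss).
    weight_opt.zero_grad()
    distill_loss(student_logits.detach(), teacher_logits, teacher_w).backward()
    weight_opt.step()
```

The alternation mirrors the summary’s description: the student is trained against the current teacher ensemble, and the ensemble weights are then re-tuned based on how the student is progressing, so unreliable teachers are gradually down-weighted.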
Low Difficulty Summary (written by GrooveSquid.com, original content)
A new way to help computers learn from messy data! Imagine you’re trying to sort a bunch of friends into different groups based on how similar they are. But instead of clear labels, the friends have weird and confusing characteristics. This paper proposes a solution to this problem by training multiple “teachers” to help a student model make better decisions. The teachers are all trained in different ways and then combined to give the best answer. The method is tested on real data and performs better than other approaches.

Keywords

» Artificial intelligence  » Classification  » Distillation  » Optimization  » Self supervised  » Student model