Summary of Delayed Bottlenecking: Alleviating Forgetting in Pre-trained Graph Neural Networks, by Zhe Zhao et al.
Delayed Bottlenecking: Alleviating Forgetting in Pre-trained Graph Neural Networks
by Zhe Zhao, Pengkun Wang, Xu Wang, Haibin Wen, Xiaolong Xie, Zhengyang Zhou, Qingfu Zhang, Yang Wang
First submitted to arXiv on: 23 Apr 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper and are written at different levels of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | A novel pre-training framework for graph neural networks (GNNs) is proposed to improve the transferability of pre-trained knowledge to downstream tasks. Traditional self-supervised pre-training strategies may fail to retain information that is useful for the downstream task, and this forgetting degrades performance. The Delayed Bottlenecking Pre-training (DBP) framework addresses this by suppressing compression during the pre-training phase and delaying it until the fine-tuning phase, where labeled data and the downstream task guide the compression process (a toy sketch of this two-phase schedule appears below the table). Two information control objectives are designed to optimize the framework, which is evaluated on chemistry and biology datasets, demonstrating its effectiveness. |
| Low | GrooveSquid.com (original content) | GNNs can learn useful knowledge from large amounts of unlabeled data before being applied to specific tasks. However, this pre-training process has limitations. A new approach called Delayed Bottlenecking Pre-training (DBP) tries to solve these issues by allowing the GNN to remember more information during pre-training and then adjusting it during the fine-tuning phase, where it’s actually used. This helps the GNN perform better on specific tasks. The DBP method is tested on chemistry and biology problems and shows promising results. |
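
The two-phase idea described in the summaries can be pictured as a loss schedule: keep the compression pressure on the latent representation near zero during self-supervised pre-training, then switch it on during supervised fine-tuning. The sketch below is an illustrative assumption, not the authors’ implementation; the `Encoder`, the squared-norm compression proxy, and the `beta_pretrain`/`beta_finetune` weights are stand-ins for the paper’s actual GNN encoder and information control objectives.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Stand-in for a GNN encoder; a real implementation would use message passing."""
    def __init__(self, in_dim: int, hid_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hid_dim),
            nn.ReLU(),
            nn.Linear(hid_dim, hid_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def compression_penalty(z: torch.Tensor) -> torch.Tensor:
    # Toy proxy for an information-bottleneck compression term:
    # penalize the latent code's squared norm (a KL term to a fixed
    # prior would be another common choice).
    return z.pow(2).sum(dim=-1).mean()

def pretrain_loss(z, recon, target, beta_pretrain=0.0):
    # Self-supervised reconstruction objective; the compression term is
    # suppressed (beta_pretrain ~ 0) so little information is discarded.
    return F.mse_loss(recon, target) + beta_pretrain * compression_penalty(z)

def finetune_loss(z, logits, labels, beta_finetune=1e-2):
    # Downstream task loss plus the now-activated ("delayed") compression
    # term, guided by labeled data.
    return F.cross_entropy(logits, labels) + beta_finetune * compression_penalty(z)
```

The key design choice this sketch illustrates is simply where the compression term is applied: withheld while the encoder learns from unlabeled graphs, and introduced only once labels are available to indicate which information is safe to discard.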
Keywords
» Artificial intelligence » Fine-tuning » GNN » Self-supervised » Transferability