

InLINE: Inner-Layer Information Exchange for Multi-task Learning on Heterogeneous Graphs

by Xinyue Feng, Jinquan Hang, Yuequn Zhang, Haotian Wang, Desheng Zhang, Guang Wang

First submitted to arXiv on: 29 Oct 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper presents a novel approach to addressing negative transfer in Multi-Task Learning (MTL) for heterogeneous graph modeling. MTL learns multiple tasks on a single graph, but existing methods can yield suboptimal performance due to interference between task outputs. The authors propose Inner-Layer Information Exchange (InLINE), which enables fine-grained information exchange within each graph layer. InLINE consists of Structure Disentangled Experts, which disentangle graph structure layer by layer, and Structure Disentangled Gates, which assign the disentangled information to different tasks. Experiments on public datasets and an industry dataset show that InLINE effectively alleviates negative transfer, improving performance by 6.3% on the DBLP dataset and 3.6% on the industry dataset compared to state-of-the-art methods.
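
To make the expert-and-gate idea more concrete, below is a minimal PyTorch sketch of one inner-layer exchange of this kind: several shared experts produce candidate representations, and each task's gate mixes them with its own softmax weights. All names (DisentangledExpertLayer and so on) are hypothetical, and plain linear experts stand in for the paper's graph-specific components; this is an illustration of the general mechanism, not the authors' actual Structure Disentangled Experts and Gates.

import torch
import torch.nn as nn

class DisentangledExpertLayer(nn.Module):
    """Illustrative expert-gate layer (hypothetical simplification).

    Several experts encode the input, and each task's gate mixes the
    expert outputs with its own softmax weights, so tasks can pick up
    different information inside the same layer.
    """

    def __init__(self, in_dim, out_dim, num_experts, num_tasks):
        super().__init__()
        # Each expert is a plain linear encoder standing in for a
        # structure-specific graph transformation.
        self.experts = nn.ModuleList(
            [nn.Linear(in_dim, out_dim) for _ in range(num_experts)]
        )
        # One gate per task produces per-node mixing weights over experts.
        self.gates = nn.ModuleList(
            [nn.Linear(in_dim, num_experts) for _ in range(num_tasks)]
        )

    def forward(self, x):
        # expert_out: (num_experts, num_nodes, out_dim)
        expert_out = torch.stack([expert(x) for expert in self.experts])
        task_outputs = []
        for gate in self.gates:
            # weights: (num_nodes, num_experts), softmax over experts
            weights = torch.softmax(gate(x), dim=-1)
            # Per-node weighted sum of expert outputs for this task.
            mixed = torch.einsum("ne,end->nd", weights, expert_out)
            task_outputs.append(mixed)
        return task_outputs  # one representation per task

# Usage: two tasks share four experts inside a single layer.
layer = DisentangledExpertLayer(in_dim=16, out_dim=32, num_experts=4, num_tasks=2)
node_features = torch.randn(10, 16)  # 10 nodes, 16-dim features
reps = layer(node_features)
print([r.shape for r in reps])  # [torch.Size([10, 32]), torch.Size([10, 32])]

Because the gates operate inside the layer rather than on its final output, each task can draw on a different mixture of the shared representations, which is the intuition behind the fine-grained exchange described above.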
Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper helps solve a problem in machine learning where several tasks learn from different parts of complex data. The authors want to make sure each task learns the most important things without getting confused by the other tasks. They propose a new method called Inner-Layer Information Exchange (InLINE). It looks at what is going on within each part of the data and makes sure each task gets the right information. Results show that InLINE works well and can improve performance by up to 6.3% in some cases.

Keywords

* Artificial intelligence
* Machine learning
* Multi-task