SGW-based Multi-Task Learning in Vision Tasks

by Ruiyuan Zhang, Yuyao Chen, Yuchi Huo, Jiaxiang Liu, Dianbing Xi, Jie Liu, Chao Wu

First submitted to arXiv on: 3 Oct 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Machine Learning (cs.LG)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

Read the original abstract on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)

This paper addresses the challenge of knowledge sharing in multi-task learning (MTL) as neural networks tackle increasingly complex tasks and large-scale datasets. Current cross-attention MTL methods struggle with noisy inter-task information, which hinders their performance. To overcome this, the authors propose an information bottleneck knowledge extraction module (KEM) that constrains the flow of information, reducing both inter-task interference and computational complexity. To stabilize the KEM, features are first projected into equiangular tight frame (ETF) space, exploiting the neural collapse phenomenon, before being fed into the module. Comparative experiments on multiple datasets demonstrate that the proposed approach outperforms existing MTL methods.
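
The paper's own implementation is not shown on this page, but the two ingredients of the medium summary (an information bottleneck that limits what flows between tasks, and a fixed ETF projection that stabilizes features) can be sketched in a few lines of PyTorch. Everything below is an illustrative assumption, not the authors' code: the class names (ETFProjection, KEM), the variational form of the bottleneck, and all dimensions are hypothetical.

```python
import torch
import torch.nn as nn


class ETFProjection(nn.Module):
    """Project features onto a fixed simplex equiangular tight frame (ETF).

    Neural collapse suggests that well-trained features converge toward such
    a frame, so a fixed ETF basis gives the bottleneck a stable input space.
    """

    def __init__(self, feat_dim: int, num_dirs: int):
        super().__init__()
        # Random partial orthogonal basis U of shape (feat_dim, num_dirs).
        u, _ = torch.linalg.qr(torch.randn(feat_dim, num_dirs))
        k = num_dirs
        # Simplex ETF: M = sqrt(k / (k - 1)) * U @ (I - 11^T / k).
        etf = (k / (k - 1)) ** 0.5 * u @ (torch.eye(k) - torch.ones(k, k) / k)
        self.register_buffer("etf", etf)  # fixed: not a trainable parameter

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Coordinates of the features along the ETF directions.
        return x @ self.etf


class KEM(nn.Module):
    """Variational information-bottleneck style knowledge extraction (sketch)."""

    def __init__(self, feat_dim: int, num_dirs: int, bottleneck_dim: int):
        super().__init__()
        self.proj = ETFProjection(feat_dim, num_dirs)
        self.to_mu = nn.Linear(num_dirs, bottleneck_dim)
        self.to_logvar = nn.Linear(num_dirs, bottleneck_dim)

    def forward(self, feats: torch.Tensor):
        z = self.proj(feats)  # stabilize features in ETF space first
        mu, logvar = self.to_mu(z), self.to_logvar(z)
        # Reparameterized sample: this narrow stochastic code is what other
        # tasks see, which limits how much noise can cross task boundaries.
        shared = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        # KL penalty keeps the code close to a standard normal prior,
        # i.e. it constrains the information that flows through.
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()
        return shared, kl


# Example: extract a shared code from one task's 512-d backbone features.
kem = KEM(feat_dim=512, num_dirs=64, bottleneck_dim=128)
shared, kl = kem(torch.randn(8, 512))  # add kl (weighted) to the training loss
```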

Low Difficulty Summary (written by GrooveSquid.com, original content)

This paper helps us understand how computers can learn many things at once, like recognizing objects and understanding whole scenes. Right now, computers have trouble sharing information between these different tasks. The authors think they can improve this by creating a new module that controls the flow of information, making it easier for a computer to focus on one task without getting confused by the others. They tested their idea on several datasets and found that it worked better than current methods.

Keywords

» Artificial intelligence  » Cross attention  » Multi task