Representation Surgery for Multi-Task Model Merging
by Enneng Yang, Li Shen, Zhenyi Wang, Guibing Guo, Xiaojun Chen, Xingwei Wang, Dacheng Tao
First submitted to arXiv on: 5 Feb 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | The paper's original abstract, available on arXiv.
Medium | GrooveSquid.com (original content) | This paper proposes “Surgery”, a method that addresses the representation bias issue in multi-task learning (MTL). Recent work has expanded MTL's application scenarios by directly merging multiple independently trained models, but the merged model often performs poorly because its representations are biased away from those of the individual models. The Surgery module is a lightweight, task-specific module that takes the merged model's representation as input and estimates the bias contained in it. It is trained with an unsupervised optimization objective that minimizes the distance between the merged model's corrected representation and the corresponding individual model's representation (a code sketch of this idea appears after this table). Experiments show significant performance improvements when the Surgery module is applied on top of state-of-the-art model merging schemes.
Low | GrooveSquid.com (original content) | This paper wants to make multi-task learning (MTL) work better. MTL lets one model do many tasks at once, but merging several separately trained models into one causes a problem called representation bias: the merged model's internal information no longer looks like that of the individual models, so it performs poorly. To fix this, the authors created a small add-on module called “Surgery” that takes the merged model's information and removes the bias from it. When they tested the idea, they found it makes the merged model work much better.
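To make the Surgery idea described above concrete, here is a minimal PyTorch sketch. It is not the authors' implementation: the class name `SurgeryModule`, the low-rank adapter shape, the `rank` value, the L1 distance, and the placeholder feature tensors are all illustrative assumptions consistent with the summary's description of a lightweight, task-specific module trained unsupervised.

```python
import torch
import torch.nn as nn

class SurgeryModule(nn.Module):
    """Hypothetical lightweight task-specific module that estimates the
    representation bias of a merged model (a sketch, not the paper's code)."""
    def __init__(self, dim: int, rank: int = 16):
        super().__init__()
        # A low-rank mapping keeps the module lightweight (rank is an assumption).
        self.down = nn.Linear(dim, rank, bias=False)
        self.up = nn.Linear(rank, dim, bias=False)

    def forward(self, merged_repr: torch.Tensor) -> torch.Tensor:
        # Estimate the bias contained in the merged representation and
        # subtract it to obtain a corrected representation.
        bias = self.up(torch.relu(self.down(merged_repr)))
        return merged_repr - bias

def surgery_loss(corrected_repr: torch.Tensor,
                 individual_repr: torch.Tensor) -> torch.Tensor:
    # Unsupervised objective: minimize the distance between the corrected
    # merged representation and the individual model's representation.
    # No task labels are needed; the L1 distance here is an assumption.
    return (corrected_repr - individual_repr).abs().mean()

# Usage sketch: one Surgery module per task, updated on unlabeled inputs.
dim = 512                               # feature dimension (assumed)
surgery = SurgeryModule(dim)
optimizer = torch.optim.Adam(surgery.parameters(), lr=1e-3)

merged_repr = torch.randn(8, dim)       # stands in for the merged model's features
individual_repr = torch.randn(8, dim)   # stands in for one task model's features

loss = surgery_loss(surgery(merged_repr), individual_repr)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In the paper's setting, `merged_repr` and `individual_repr` would come from forwarding the same unlabeled inputs through the merged model and the corresponding individually trained task model, with one Surgery module kept per task.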
Keywords
* Artificial intelligence
* Multi-task
* Optimization
* Unsupervised