Summary of Gradient Projection For Continual Parameter-Efficient Tuning, by Jingyang Qiao, Zhizhong Zhang, Xin Tan, Yanyun Qu, Wensheng Zhang, Zhi Han, and Yuan Xie
Gradient Projection For Continual Parameter-Efficient Tuning
by Jingyang Qiao, Zhizhong Zhang, Xin Tan, Yanyun Qu, Wensheng Zhang, Zhi Han, Yuan Xie
First submitted to arXiv on: 22 May 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
- High difficulty (written by the paper authors): read the original abstract here.
- Medium difficulty (GrooveSquid.com original content): This paper proposes a unified framework, Parameter Efficient Gradient Projection (PEGP), that reformulates existing parameter-efficient tuning methods such as Adapter, LoRA, Prefix-tuning, and Prompt-tuning from the perspective of gradient projection. By applying orthogonal gradient projection, updates for new tasks are constrained to directions that leave the old feature space largely undisturbed, helping large-scale models resist forgetting while requiring only minimal extra memory and training time. Extensive experiments with different backbones (ViT and CLIP) on diverse datasets show that PEGP reduces forgetting across a variety of continual learning settings.
- Low difficulty (GrooveSquid.com original content): This paper helps us better understand how to train large models while keeping the knowledge they already have. Right now, there is a problem with these big models: they can forget what they learned earlier if they are not trained carefully. The authors came up with a new way to train them so that they retain old knowledge while learning new things at the same time. They tested this approach on different types of data, and it worked well. This could be important for building artificial intelligence that keeps learning and improving over time.
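The medium-difficulty summary describes the core mechanism: gradients of the tunable parameters are projected onto the subspace orthogonal to the old tasks' feature space, so an update ΔW satisfies ΔW·x ≈ 0 for old-task features x and therefore barely changes old-task outputs. The sketch below is an illustrative, simplified rendering of that idea, not the authors' released code; the function names (`feature_basis`, `project_gradient`) and the SVD-with-energy-threshold basis construction are assumptions made for this example.

```python
import torch

# Illustrative sketch of orthogonal gradient projection for a parameter-efficient
# module (e.g. a LoRA matrix or a prompt embedding). Names and the SVD-based
# basis construction are assumptions for this example, not the paper's code.

def feature_basis(old_features: torch.Tensor, energy: float = 0.95) -> torch.Tensor:
    """Orthonormal basis U (d x k) capturing most of the old tasks' feature space.

    old_features: (num_samples, d) activations entering the tuned module,
    collected on previous tasks.
    """
    U, S, _ = torch.linalg.svd(old_features.T, full_matrices=False)
    cum = torch.cumsum(S**2, dim=0) / (S**2).sum()
    k = int((cum < energy).sum().item()) + 1  # smallest k reaching the energy threshold
    return U[:, :k]

def project_gradient(grad: torch.Tensor, U: torch.Tensor) -> torch.Tensor:
    """Remove the component of grad that lies in span(U).

    grad: (out_dim, d) gradient of a weight that multiplies the old features;
    the projected update then (approximately) preserves old-task responses.
    """
    return grad - grad @ U @ U.T

# Hypothetical usage inside a training step, just before optimizer.step():
# for p in peft_params:               # e.g. LoRA down-projection matrices
#     if p.grad is not None:
#         p.grad = project_gradient(p.grad, U)
```

The intuition behind the projection: if W is a weight that right-multiplies old-task features, then (W + ΔW)·x = W·x whenever ΔW·x = 0, which holds when the update's row space is orthogonal to the span of the old features; restricting learning to that orthogonal complement is what keeps extra memory and compute low, since only a small basis U needs to be stored.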
Keywords
» Artificial intelligence » Continual learning » LoRA » Parameter-efficient » Prompt » ViT