Large Convolutional Model Tuning via Filter Subspace

by Wei Chen, Zichen Miao, Qiang Qiu

First submitted to arXiv on: 1 Mar 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Machine Learning (cs.LG)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The authors propose a parameter-efficient fine-tuning method for adapting large pre-trained convolutional models to downstream tasks. Each filter is decomposed over a small set of filter atoms, the components responsible for spatial-only convolution, weighted by atom coefficients. Fine-tuning adjusts only the filter atoms, while the atom coefficients, which hold the pre-trained knowledge, remain frozen. This filter subspace view also allows each filter atom to be recursively decomposed into another set of atoms, expanding the tunable space when needed. Because only a small number of parameters are adapted, the method is highly parameter-efficient and effectively preserves the capabilities of the pre-trained model. Extensive experiments show that this simple scheme surpasses previous tuning baselines on both discriminative and generative tasks.
Low Difficulty Summary (written by GrooveSquid.com, original content)
The paper is about making machines learn new things quickly without using too much computer power or storage space. It’s like when you’re trying to teach a dog new tricks – you don’t want to overwhelm them with too many commands at once. The researchers came up with a clever way to adapt big pre-trained models to specific tasks by adjusting only the parts that are responsible for processing visual information, while keeping the rest of the model unchanged. This approach is very efficient and helps prevent the model from becoming too specialized to one task. The results show that this method works better than previous methods for both learning new things and creating new data.
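The medium summary describes decomposing each convolutional filter into filter atoms (small spatial kernels that get tuned) and atom coefficients (which stay frozen). A minimal NumPy sketch of one such decomposition; the SVD-based factorization, variable names, and shapes here are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

# Hedged sketch of the filter-subspace idea: factor a conv layer's filters
# into spatial "filter atoms" (k x k bases, the only tuned parameters) and
# "atom coefficients" (frozen, holding the pre-trained knowledge).

rng = np.random.default_rng(0)
c_out, c_in, k, m = 8, 4, 3, 9       # channels, kernel size, number of atoms

# Pre-trained filters: (c_out, c_in, k, k), flattened to (c_out*c_in, k*k)
W = rng.standard_normal((c_out, c_in, k, k))
W_flat = W.reshape(c_out * c_in, k * k)

# One way to obtain the factorization: truncated SVD, W_flat ≈ alpha @ atoms
U, s, Vt = np.linalg.svd(W_flat, full_matrices=False)
alpha = U[:, :m] * s[:m]             # atom coefficients (c_out*c_in, m): frozen
atoms = Vt[:m]                       # filter atoms (m, k*k): tuned during adaptation

# The adapted layer reconstructs its filters from the (updated) atoms
W_tuned = (alpha @ atoms).reshape(c_out, c_in, k, k)

# Parameter efficiency: only the m*k*k atom values would be trained
print("tuned params:", atoms.size, "vs full filters:", W.size)
```

During fine-tuning, a gradient step would update only `atoms`; reconstructing `W_tuned` from the frozen `alpha` and the updated atoms is what keeps the adapted model close to the pre-trained one.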

Keywords

  • Artificial intelligence
  • Fine tuning
  • Parameter efficient