Summary of Time-, Memory- and Parameter-Efficient Visual Adaptation, by Otniel-Bogdan Mercea et al.
Time-, Memory- and Parameter-Efficient Visual Adaptation
by Otniel-Bogdan Mercea, Alexey Gritsenko, Cordelia Schmid, Anurag Arnab
First submitted to arXiv on: 5 Feb 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | The proposed adaptation method efficiently finetunes foundation models for downstream tasks without requiring backpropagation through the entire model. Instead, it trains a lightweight network in parallel that operates on features from the frozen, pretrained backbone (a rough sketch of this setup follows the table). This approach achieves state-of-the-art accuracy-parameter trade-offs on the VTAB benchmark and outperforms prior work in training time and memory usage. |
Low | GrooveSquid.com (original content) | Foundation models are becoming increasingly popular, but they need to be finetuned for specific tasks. Normally, this requires backpropagation through the entire model, which takes a lot of time and memory. The new method gets around this by using a small network that works alongside the big, frozen network. This makes training much faster and less memory-hungry, and the approach scales to very large models. |
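To make the medium-difficulty description more concrete, here is a minimal, hypothetical PyTorch-style sketch of the general idea: a small network is trained in parallel on features produced by a frozen backbone, so gradients never flow through the large model. This is not the authors' implementation; the module names, sizes, and single-feature layout are illustrative assumptions.

```python
# Illustrative sketch only: a lightweight adapter trained on frozen-backbone features.
# All names and dimensions below are assumptions, not the paper's actual architecture.
import torch
import torch.nn as nn

class ParallelAdapter(nn.Module):
    """Small trainable network that maps backbone features to task predictions."""
    def __init__(self, feature_dim: int, hidden_dim: int, num_classes: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feature_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.mlp(features)

# Stand-in for a large pretrained backbone; its parameters are frozen.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 768))
for p in backbone.parameters():
    p.requires_grad = False
backbone.eval()

adapter = ParallelAdapter(feature_dim=768, hidden_dim=64, num_classes=10)
optimizer = torch.optim.AdamW(adapter.parameters(), lr=1e-3)  # only adapter parameters
criterion = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 10, (8,))

# Features are computed without building a graph, so backpropagation stops here
# and never touches the large frozen model.
with torch.no_grad():
    features = backbone(images)

logits = adapter(features)
loss = criterion(logits, labels)
loss.backward()   # gradients exist only for the lightweight adapter
optimizer.step()
```

Because only the small adapter stores gradients and optimizer state, training time and memory scale with the adapter rather than the backbone, which is the trade-off the summary highlights.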
Keywords
* Artificial intelligence
* Backpropagation