Summary of GTA: Guided Transfer of Spatial Attention from Object-Centric Representations, by SeokHyun Seo et al.
GTA: Guided Transfer of Spatial Attention from Object-Centric Representations
by SeokHyun Seo, Jinwoo Hong, JungWoo Chae, Kyungyul Kim, Sangheum Hwang
First submitted to arXiv on: 5 Jan 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | Read the original abstract here
Medium | GrooveSquid.com (original content) | This paper explores the limitations of using pre-trained representations in Vision Transformers (ViT) for transfer learning. Although these models often perform well, they can easily overfit a limited training dataset and lose their valuable pre-trained properties. The authors investigate this phenomenon through attention maps in ViT and observe that rich representations deteriorate when models are fine-tuned on small datasets. To address this issue, the paper proposes a novel regularization method called Guided Transfer of spatial Attention (GTA), which regularizes the self-attention maps between source and target models. Experimental results show that GTA consistently improves accuracy across five benchmark datasets, particularly when training data is limited.
Low | GrooveSquid.com (original content) | This research looks at how a model that has already learned from lots of pictures can help a new model learn faster and better. Sometimes, even with a good starting point, the new model forgets what it learned during the earlier training. This happens more often in Vision Transformers (ViT) because they have less built-in knowledge to start with. The authors studied this problem using special maps that show which parts of a picture the model finds important, and found that the good representations get worse when the model is trained on small amounts of data. To fix this, they came up with a new way to help the model remember what it learned earlier. This method, called Guided Transfer of spatial Attention (GTA), helps the model focus on the right things. The results show that GTA makes the model more accurate and better at learning from small datasets.
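The summaries above describe GTA as regularizing self-attention maps between the source (pre-trained) and target (fine-tuned) models. As a minimal sketch of what such attention-map guidance could look like, the snippet below assumes a simple mean-squared-error penalty between the two models' attention maps; the function name and the exact form of the loss are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: attention-map guidance between a frozen source model and a
# target model being fine-tuned. ASSUMPTION: the regularizer is a plain MSE
# between corresponding self-attention maps; the paper's actual loss may differ.

def attention_guidance_loss(source_attn, target_attn):
    """Mean squared error between two attention maps.

    Each map is a list of rows (one per query token); each row holds the
    softmax attention weights over key tokens and therefore sums to 1.
    """
    total, count = 0.0, 0
    for s_row, t_row in zip(source_attn, target_attn):
        for s, t in zip(s_row, t_row):
            total += (s - t) ** 2
            count += 1
    return total / count

# Toy 2x2 attention maps: the target has drifted from the source during
# fine-tuning, so the guidance loss is nonzero and would pull it back.
source = [[0.9, 0.1], [0.2, 0.8]]
target = [[0.7, 0.3], [0.4, 0.6]]
loss = attention_guidance_loss(source, target)
```

In practice this penalty would be added to the task loss during fine-tuning, so the target model fits the small dataset while its attention maps stay close to the source model's.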
Keywords
* Artificial intelligence * Attention * Regularization * Self-attention * Transfer learning * ViT