Summary of OTTER: Effortless Label Distribution Adaptation of Zero-shot Models, by Changho Shin et al.
OTTER: Effortless Label Distribution Adaptation of Zero-shot Models
by Changho Shin, Jitian Zhao, Sonia Cromp, Harit Vishwakarma, Frederic Sala
First submitted to arXiv on: 12 Apr 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | Read the original abstract here.
Medium | GrooveSquid.com (original content) | This paper proposes a lightweight way to adjust the predictions of pre-trained models in zero-shot settings. The problem it addresses is a mismatch between the label distribution of unbalanced web-scale pre-training data and that of downstream tasks, which can significantly hurt performance. To sidestep existing approaches that require labeled downstream data or knowledge of the true label balance, the authors introduce an optimal-transport-based method that only needs an estimate of the downstream task's label distribution. Error bounds are provided under mild conditions. Empirically, the approach improves accuracy by 4.8% and 15.9% on average in zero-shot image and text classification tasks, outperforming baselines on 17 out of 21 datasets. A minimal illustrative sketch of this idea appears after the table.
Low | GrooveSquid.com (original content) | This paper helps fix a problem with popular artificial intelligence models that solve new tasks without any training data. The problem comes from the way these models were trained on a huge amount of online data that is not balanced or fair, which can keep them from working well in new situations. To solve this, the authors developed a simple and efficient approach that adjusts how the model makes predictions, and it only needs some basic information about the new task. The authors tested their approach on many different tasks and showed that it can significantly improve performance.
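For readers who want a concrete picture of the optimal-transport adjustment described in the medium-difficulty summary, here is a minimal sketch of how such a rebalancing step could look. This is not the authors' code: the function name `sinkhorn_adjust`, the entropic-regularization parameter `reg`, and the iteration count `n_iters` are illustrative assumptions, and the paper's actual OTTER procedure and settings may differ.

```python
import numpy as np

def sinkhorn_adjust(probs, target_dist, reg=0.1, n_iters=200):
    """Rebalance zero-shot prediction scores so the predicted label marginal
    matches an estimated target label distribution, using entropic optimal
    transport solved with Sinkhorn iterations (illustrative sketch only)."""
    n, k = probs.shape
    # Cost of assigning a sample to a class: negative log-probability,
    # so confident predictions are cheap to keep.
    cost = -np.log(probs + 1e-12)
    K = np.exp(-cost / reg)            # Gibbs kernel for entropic OT
    r = np.full(n, 1.0 / n)            # each sample carries equal mass
    c = np.asarray(target_dist, float) # estimated class marginal (sums to 1)
    u = np.ones(n)
    for _ in range(n_iters):
        v = c / (K.T @ u)              # scale columns toward target marginal
        u = r / (K @ v)                # scale rows toward uniform sample mass
    plan = u[:, None] * K * v[None, :] # transport plan with desired marginals
    return plan.argmax(axis=1)         # adjusted hard predictions

# Toy usage: a model biased toward class 0, rebalanced toward a 40/60 split.
probs = np.array([[0.9, 0.1], [0.8, 0.2], [0.7, 0.3], [0.6, 0.4], [0.55, 0.45]])
preds = sinkhorn_adjust(probs, target_dist=[0.4, 0.6])
```

The key design point the sketch tries to convey is that no labeled downstream data or retraining is involved: only the model's prediction scores and an estimate of the downstream label distribution are needed to shift the predictions.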
Keywords
» Artificial intelligence » Text classification » Zero shot