Continuous Contrastive Learning for Long-Tailed Semi-Supervised Recognition

by Zi-Hao Zhou, Siyuan Fang, Zi-Jing Zhou, Tong Wei, Yuanyu Wan, Min-Ling Zhang

First submitted to arXiv on: 8 Oct 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None

Abstract of paper · PDF of paper


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com; original content)
The paper proposes a probabilistic framework that unifies recent advances in long-tailed learning by deriving the class-balanced contrastive loss through Gaussian kernel density estimation. Within this framework, the authors introduce a continuous contrastive learning method, CCL, which extends to unlabeled data via reliable and smoothed pseudo-labels. By progressively estimating the underlying label distribution and optimizing its alignment with the model's predictions, the approach handles the diverse unlabeled-data distributions that arise in real-world scenarios. CCL consistently outperforms prior state-of-the-art methods, achieving over a 4% improvement on the ImageNet-127 dataset, and its effectiveness is demonstrated across multiple datasets with varying unlabeled data distributions.
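
To make the medium summary concrete, here is a minimal sketch of how a class-balanced contrastive loss might be built from Gaussian kernel density estimation over normalized embeddings, with pair weights given by smoothed pseudo-label agreement and inverse-frequency re-weighting for tail classes. This is an illustration under our own assumptions, not the authors' implementation: the function names (`gaussian_kernel`, `continuous_contrastive_loss`), the `bandwidth` parameter, and the exact weighting scheme are all hypothetical.

```python
import torch
import torch.nn.functional as F

def gaussian_kernel(z, anchors, bandwidth=0.5):
    # For unit-norm vectors, ||z - a||^2 = 2 - 2*cos(z, a), so a Gaussian
    # kernel exp(-||z - a||^2 / (2 * bandwidth)) reduces to the form below.
    sim = z @ anchors.t()                      # (N, M) cosine similarities
    return torch.exp((sim - 1.0) / bandwidth)

def continuous_contrastive_loss(z, soft_labels, class_counts, bandwidth=0.5):
    """Hypothetical class-balanced contrastive loss in the spirit of CCL.

    Pairs are weighted by a Gaussian kernel density estimate over the
    embedding space and by the agreement of their smoothed pseudo-labels;
    per-sample losses are re-weighted by inverse class frequency so tail
    classes are not drowned out by head classes.
    """
    z = F.normalize(z, dim=1)                  # unit-norm embeddings, (N, D)
    kernel = gaussian_kernel(z, z, bandwidth)  # (N, N) pairwise kernel weights
    mask = 1.0 - torch.eye(z.size(0), device=z.device)
    kernel = kernel * mask                     # exclude self-pairs

    # Soft pair agreement from smoothed pseudo-labels: (N, N)
    agreement = soft_labels @ soft_labels.t()

    # Inverse-frequency class weights (normalized to mean 1), mapped to
    # each sample through its soft labels (the "class-balanced" part).
    inv_freq = 1.0 / class_counts.float()
    class_w = inv_freq / inv_freq.mean()
    sample_w = soft_labels @ class_w           # (N,)

    # Per-sample loss: -log of the agreement-weighted kernel mass over
    # the total kernel mass (a soft "positives over all pairs" ratio).
    pos = (kernel * agreement).sum(dim=1)
    total = kernel.sum(dim=1)
    loss = -torch.log(pos.clamp_min(1e-12) / total.clamp_min(1e-12))
    return (sample_w * loss).mean()

# Toy usage on random data: 8 samples, 16-dim embeddings, 3 classes with
# long-tailed counts.
z = torch.randn(8, 16)
soft_labels = torch.softmax(torch.randn(8, 3), dim=1)
class_counts = torch.tensor([100.0, 10.0, 1.0])
print(continuous_contrastive_loss(z, soft_labels, class_counts))
```

The paper's progressive estimate of the unlabeled label distribution could plausibly be tracked as a running average of the model's batch predictions and used to smooth the pseudo-labels fed to such a loss, but the actual estimation and alignment procedure is described in the paper itself.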

Low Difficulty Summary (written by GrooveSquid.com; original content)
This paper introduces a new way to learn when some categories have many labeled examples and others have very few. Right now, the best methods rely on producing high-quality fake labels (pseudo-labels) for large amounts of unlabeled data. However, these methods often struggle in real-world situations because the unlabeled data usually looks different from the data the model was trained on. The researchers propose a new idea called CCL (Continuous Contrastive Learning), which uses a special kind of learning that helps the model make good predictions even when there is a lot of mismatched unlabeled data. They tested the approach on many datasets and showed that it works better than other methods, especially on the ImageNet-127 dataset.

Keywords

» Artificial intelligence  » Alignment  » Contrastive loss  » Density estimation