Summary of Uncertainty-guided Open-Set Source-Free Unsupervised Domain Adaptation with Target-private Class Segregation, by Mattia Litrico et al.
Uncertainty-guided Open-Set Source-Free Unsupervised Domain Adaptation with Target-private Class Segregation
by Mattia Litrico, Davide Talon, Sebastiano Battiato, Alessio Del Bue, Mario Valerio Giuffrida, Pietro Morerio
First submitted to arXiv on: 16 Apr 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper but is written at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | High Difficulty Summary Read the original abstract here |
Medium | GrooveSquid.com (original content) | Medium Difficulty Summary The paper proposes a novel approach for Source-Free Open-set Domain Adaptation (SF-OSDA), which relaxes the assumptions of standard unsupervised domain adaptation: labeled source data and unlabeled target data are never available simultaneously, and the label spaces of the two domains differ. The proposed method segregates target-private (unknown) classes via uncertainty-based sample selection, starting from an initial clustering-based assignment and refining pseudo-labels through an uncertainty-guided process (a hedged code sketch of this step follows the table). Additionally, a novel contrastive loss, NL-InfoNCELoss, is introduced to enhance the model's robustness to noisy pseudo-labels. Experimental results on benchmark datasets demonstrate the superiority of the method over existing approaches, achieving state-of-the-art performance. |
Low | GrooveSquid.com (original content) | Low Difficulty Summary This paper tackles a tough problem in machine learning called source-free open-set domain adaptation. The goal is to teach a computer to handle new kinds of data, including things it has never seen before, without going back to the original labeled examples it was trained on. Traditional approaches struggle when the old and new data look very different. The authors came up with a new idea that works better: it groups similar new things together and then refines what the computer thinks they are. They also created a special trick called NL-InfoNCELoss that helps the computer keep learning even when some of its guesses are wrong. This worked really well in tests, which is exciting because it could help computers discover new things on their own. |
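
To make the medium summary more concrete, below is a minimal, hedged sketch of the uncertainty-guided pseudo-label refinement step it describes. This is not the authors' implementation: the clustering backend (k-means over model outputs), the entropy-based uncertainty score, the threshold value, and helper names such as `refine_pseudo_labels` are all illustrative assumptions.

```python
# Illustrative sketch only -- not the authors' code. Assumes a PyTorch classifier
# trained on the source domain and an unlabeled target DataLoader. The clustering
# backend, entropy-based uncertainty, and threshold are hypothetical choices.
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans


@torch.no_grad()
def refine_pseudo_labels(model, target_loader, num_known_classes,
                         device="cpu", entropy_threshold=0.5):
    """Cluster target samples for initial pseudo-labels, then segregate
    high-uncertainty samples into an extra target-private 'unknown' class."""
    model.eval()
    all_probs, all_feats = [], []
    for images, _ in target_loader:            # target labels are unused (unsupervised)
        logits = model(images.to(device))      # shape: [batch, num_known_classes]
        all_probs.append(F.softmax(logits, dim=1).cpu())
        all_feats.append(logits.cpu())         # logits reused as features for simplicity
    probs = torch.cat(all_probs)
    feats = torch.cat(all_feats)

    # 1) Initial clustering-based assignment over the whole target set.
    clusters = KMeans(n_clusters=num_known_classes, n_init=10).fit_predict(feats.numpy())
    pseudo_labels = torch.as_tensor(clusters, dtype=torch.long)

    # 2) Uncertainty-guided refinement: normalized prediction entropy in [0, 1].
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
    entropy = entropy / torch.log(torch.tensor(float(num_known_classes)))

    # 3) Segregation: high-uncertainty samples get the target-private 'unknown' label.
    unknown_mask = entropy > entropy_threshold
    pseudo_labels[unknown_mask] = num_known_classes  # index num_known_classes = 'unknown'
    return pseudo_labels, unknown_mask
```

The NL-InfoNCELoss itself is not reproduced here, since the abstract does not give its exact form; conceptually it plays the role of a contrastive training objective designed so that noisy pseudo-labels contribute less misleading learning signal.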
Keywords
» Artificial intelligence » Clustering » Contrastive loss » Domain adaptation » Machine learning » Unsupervised