


Learn from the Learnt: Source-Free Active Domain Adaptation via Contrastive Sampling and Visual Persistence

by Mengyao Lyu, Tianxiang Hao, Xinhao Xu, Hui Chen, Zijia Lin, Jungong Han, Guiguang Ding

First submitted to arXiv on: 26 Jul 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper investigates a practical Domain Adaptation (DA) paradigm called Source data-Free Active Domain Adaptation (SFADA), where source data becomes inaccessible during adaptation, and only a limited annotation budget is available in the target domain. The authors present Learn From the Learnt (LFTL), a novel approach for SFADA that leverages knowledge from pre-trained models without extra overhead. They propose Contrastive Active Sampling to select informative target samples and Visual Persistence-guided Adaptation to facilitate feature distribution alignment. Extensive experiments on three benchmarks demonstrate state-of-the-art performance, superior computational efficiency, and continuous improvements as the annotation budget increases.

Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper looks at a way to help machines learn from one place (the source domain) and apply what they’ve learned to another related area (the target domain), without needing all the original data. It’s called Source data-Free Active Domain Adaptation (SFADA). The researchers came up with a new idea, “Learn From the Learnt” (LFTL), which uses knowledge from pre-trained models to help machines learn even better. They also created a special way to pick the most helpful samples in the target domain and a method to make sure the machine is learning well. By testing their ideas on three different benchmarks, they showed that it works really well and runs faster than previous methods.

Keywords

  • Artificial intelligence
  • Alignment
  • Domain adaptation