Summary of Multi-Source Domain Adaptation for Object Detection with Prototype-based Mean-teacher, by Atif Belal et al.


Multi-Source Domain Adaptation for Object Detection with Prototype-based Mean-teacher

by Atif Belal, Akhil Meethal, Francisco Perdigon Romero, Marco Pedersoli, Eric Granger

First submitted to arXiv on: 26 Sep 2023

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper, each written at a different level of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper proposes Prototype-based Mean Teacher (PMT), a novel method for adapting visual object detectors to an operational target domain. The authors address multi-source domain adaptation (MSDA), where the labeled training data come from multiple source domains. Existing MSDA methods learn both domain-invariant and domain-specific parameters, so memory usage and the risk of overfitting grow with the number of source domains. PMT instead encodes domain-specific information in class prototypes learned with a contrastive loss, improving accuracy and robustness without increasing the number of parameters as more source domains are added. The paper demonstrates that PMT outperforms state-of-the-art MSDA methods on several challenging object detection datasets.
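
To make the method description above more concrete, here is a minimal sketch, in PyTorch, of how class prototypes could be maintained and used in a contrastive loss over detector ROI features. This is an illustrative sketch under assumptions, not the authors' implementation: it keeps a single prototype per class for brevity (a domain-specific variant would keep one set of prototypes per source domain), and the names update_prototypes and prototype_contrastive_loss are hypothetical.

```python
import torch
import torch.nn.functional as F


def update_prototypes(prototypes, feats, labels, momentum=0.9):
    # Exponential-moving-average update of one prototype per class,
    # using the features of that class present in the current batch.
    for c in labels.unique():
        class_mean = feats[labels == c].mean(dim=0)
        prototypes[c] = momentum * prototypes[c] + (1 - momentum) * class_mean
    return prototypes


def prototype_contrastive_loss(feats, labels, prototypes, temperature=0.1):
    # Pull each feature toward its own class prototype and push it away
    # from the other prototypes (InfoNCE-style cross-entropy over similarities).
    feats = F.normalize(feats, dim=1)          # (N, D)
    protos = F.normalize(prototypes, dim=1)    # (C, D)
    logits = feats @ protos.t() / temperature  # (N, C) cosine similarities
    return F.cross_entropy(logits, labels)


# Toy usage: 8 ROI embeddings of dimension 256, 5 object classes.
num_classes, dim = 5, 256
prototypes = torch.zeros(num_classes, dim)
feats = torch.randn(8, dim)
labels = torch.randint(0, num_classes, (8,))

prototypes = update_prototypes(prototypes, feats, labels)
loss = prototype_contrastive_loss(feats, labels, prototypes)
print(f"prototype contrastive loss: {loss.item():.4f}")
```

In the full method, a loss like this would typically be applied inside a mean-teacher framework, where the teacher detector is an exponential moving average of the student and supplies pseudo-labels for the unlabeled target domain.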

Low Difficulty Summary (written by GrooveSquid.com, original content)
PMT is a new way to help computers recognize objects in different situations. Right now, a computer trained to recognize things in one place often does poorly when we use it somewhere else. Multi-source domain adaptation (MSDA) tackles this by letting the computer learn from several different places at once. PMT is better than other MSDA methods because it doesn’t need extra memory or become more complicated, even when there are many different situations.

Keywords

  • Artificial intelligence
  • Contrastive loss
  • Domain adaptation
  • Object detection
  • Overfitting