
Learning to Discover Knowledge: A Weakly-Supervised Partial Domain Adaptation Approach

by Mengcheng Lan, Min Meng, Jun Yu, Jigang Wu

First submitted to arXiv on: 20 Jun 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Machine Learning (cs.LG)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each of the summaries below covers the same AI paper but is written at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (the paper’s original abstract, written by the paper authors)
Read the original abstract on arXiv.

Medium Difficulty Summary (original content by GrooveSquid.com)
The paper addresses weakly-supervised partial domain adaptation (WS-PDA), which involves transferring a classifier from a large source domain with noisy labels to a small unlabeled target domain. The key challenges are discovering reliable knowledge from both domains and adapting that knowledge across domains. To tackle this, the authors propose a self-paced transfer classifier learning (SP-TCL) approach that discovers faithful knowledge via a prudent loss function and adapts the learned knowledge to the target domain by iteratively excluding source examples from training. The whole procedure is built on a self-paced learning scheme that gradually seeks a classifier better suited to the target domain (a simplified sketch of this kind of self-paced selection loop appears after the summaries below). The approach outperforms state-of-the-art methods on several benchmark datasets.
Low Difficulty Summary (original content by GrooveSquid.com)
The paper explores how to adapt a classifier from a large source domain with noisy labels to an unlabeled target domain. It proposes a new method, SP-TCL, that combines self-paced learning and transfer learning to achieve good results. This helps solve the problem of adapting a model to a new domain when no labeled data is available there.
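To make the idea of a self-paced selection loop concrete, here is a minimal, hypothetical sketch of iteratively training a classifier on noisy source labels while gradually excluding the highest-loss (least reliable) source examples. This is not the authors' exact SP-TCL algorithm: the logistic-regression classifier, the per-example negative log-likelihood standing in for the paper's prudent loss, the keep-fraction schedule, and the function name self_paced_transfer are all illustrative assumptions.

```python
# Illustrative sketch of a self-paced source-example selection loop,
# NOT the exact SP-TCL method from the paper. Assumes integer class
# labels and that every class stays represented across rounds.
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_paced_transfer(Xs, ys_noisy, Xt, rounds=5,
                        keep_frac_start=1.0, keep_frac_end=0.5):
    """Train on trusted source examples, score all source examples,
    and shrink the trusted set toward the easiest (lowest-loss) ones."""
    keep = np.ones(len(Xs), dtype=bool)  # start by trusting every source example
    clf = LogisticRegression(max_iter=1000)
    for r in range(rounds):
        # 1) Fit on the currently trusted source examples
        #    (pseudo-labeled target examples could be added here as well).
        clf.fit(Xs[keep], ys_noisy[keep])
        # 2) Per-example negative log-likelihood as a simple stand-in
        #    for the paper's prudent loss.
        proba = clf.predict_proba(Xs)
        cols = np.searchsorted(clf.classes_, ys_noisy)
        nll = -np.log(proba[np.arange(len(Xs)), cols] + 1e-12)
        # 3) Self-paced schedule: keep a shrinking fraction of the
        #    lowest-loss examples, excluding likely-noisy ones.
        frac = keep_frac_start + (keep_frac_end - keep_frac_start) * (r + 1) / rounds
        keep = nll <= np.quantile(nll, frac)
    # Final predictions serve as pseudo-labels for the unlabeled target domain.
    return clf, clf.predict(Xt)

# Usage (with numpy arrays Xs, ys_noisy, Xt):
# clf, target_pseudo_labels = self_paced_transfer(Xs, ys_noisy, Xt)
```

The design point this sketch illustrates is that the trusted training set is not fixed: each round re-scores every source example under the current classifier and tightens the inclusion threshold, so examples with noisy or non-transferable labels are progressively dropped.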

Keywords

» Artificial intelligence  » Domain adaptation  » Loss function  » Supervised  » Transfer learning