
Summary of AsyCo: An Asymmetric Dual-task Co-training Model for Partial-label Learning, by Beibei Li et al.


AsyCo: An Asymmetric Dual-task Co-training Model for Partial-label Learning

by Beibei Li, Yiyuan Zheng, Beihong Jin, Tao Xiang, Haobo Wang, Lei Feng

First submitted to arXiv on: 21 Jul 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
The proposed asymmetric dual-task co-training PLL model, AsyCo, addresses the error accumulation problem in self-training PLL models by explicitly forcing two networks to learn from different views. The disambiguation network is trained on the self-training PLL task to learn label confidence, while the auxiliary network is trained in a supervised learning paradigm to learn from noisy pairwise similarity labels constructed according to the learned label confidence. Information distillation and confidence refinement are used to mitigate the error accumulation problem. AsyCo achieves state-of-the-art performance on both uniform and instance-dependent partially labeled datasets.

Low Difficulty Summary (original content by GrooveSquid.com)
AsyCo is a new way to improve Partial-Label Learning (PLL) models. These models learn from data where each example comes with several candidate labels, only one of which is correct. Sometimes, these models get stuck because early mistakes feed back into later training. AsyCo helps by having two networks that learn different things and can correct each other's mistakes. This makes the model better at learning from partially labeled data.
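The two-step pipeline described in the medium summary (the disambiguation network produces label confidences over each example's candidate set, and those confidences are used to construct pairwise similarity labels for the auxiliary network) can be sketched roughly as follows. This is a minimal illustration, not the paper's exact formulation: the function names and the argmax-agreement rule for deciding similarity are assumptions made for the sketch.

```python
import numpy as np

def normalized_confidence(logits, candidate_mask):
    """Softmax restricted to each example's candidate labels.

    logits:          (n, C) raw scores from the disambiguation network
    candidate_mask:  (n, C) binary mask, 1 where a label is a candidate
    Returns label confidences that are zero outside the candidate set
    and sum to 1 within it.
    """
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    exp = exp * candidate_mask  # zero out non-candidate labels
    return exp / exp.sum(axis=1, keepdims=True)

def pairwise_similarity_labels(confidence):
    """Noisy similarity labels derived from the learned confidences.

    Illustrative rule (an assumption): two examples are marked
    "similar" when their most-confident candidate labels agree.
    Returns an (n, n) binary similarity matrix.
    """
    pseudo = confidence.argmax(axis=1)
    return (pseudo[:, None] == pseudo[None, :]).astype(float)

# Toy usage: 4 examples, 5 classes, each with a small candidate set.
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 5))
mask = np.array([[1, 1, 0, 0, 0],
                 [0, 1, 1, 0, 0],
                 [1, 0, 0, 1, 0],
                 [0, 0, 1, 1, 1]], dtype=float)
conf = normalized_confidence(logits, mask)
similarity = pairwise_similarity_labels(conf)
```

The auxiliary network would then be trained on `similarity` as (noisy) supervision, and in AsyCo its knowledge flows back to the disambiguation network via distillation and confidence refinement.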

Keywords

» Artificial intelligence  » Distillation  » Self training  » Supervised