


Tackling Dimensional Collapse toward Comprehensive Universal Domain Adaptation

by Hung-Chieh Fang, Po-Yi Lu, Hsuan-Tien Lin

First submitted to arxiv on: 15 Oct 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
Universal Domain Adaptation (UniDA) addresses unsupervised domain adaptation in which the target classes may differ arbitrarily from the source classes, apart from a shared subset. The common partial domain matching (PDM) approach aligns only the shared classes, but it struggles in extreme cases where many source classes are absent from the target domain, underperforming even the naive baseline that trains on source data alone. The authors identify dimensional collapse (DC) in the target representations as the primary cause of PDM's underperformance. To address this limitation, they propose jointly leveraging the alignment and uniformity techniques from modern self-supervised learning (SSL) on the unlabeled target data, preserving the intrinsic structure of the learned representations. Experiments show that the SSL-based method consistently outperforms PDM, achieving new state-of-the-art results across a broader benchmark of UniDA scenarios with varying proportions of shared classes.
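The paper's exact losses are not reproduced in this summary. As a rough illustration of the alignment and uniformity objectives from self-supervised learning that the summary refers to (the function names and the hyperparameters `alpha` and `t` are illustrative assumptions, not the paper's settings), a minimal pure-Python sketch:

```python
import math

def alignment_loss(pairs, alpha=2):
    """Mean distance**alpha between embeddings of positive pairs.
    Lower values mean augmented views of the same input stay close."""
    total = 0.0
    for x, y in pairs:
        dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
        total += dist ** alpha
    return total / len(pairs)

def uniformity_loss(embeddings, t=2.0):
    """Log of the mean Gaussian-kernel similarity over all embedding pairs.
    Lower (more negative) values mean embeddings are spread out over the
    feature space, which counteracts dimensional collapse."""
    sims = []
    n = len(embeddings)
    for i in range(n):
        for j in range(i + 1, n):
            sq = sum((a - b) ** 2 for a, b in zip(embeddings[i], embeddings[j]))
            sims.append(math.exp(-t * sq))
    return math.log(sum(sims) / len(sims))

# Usage: collapsed embeddings (all identical) give uniformity loss 0,
# while embeddings spread over the unit circle give a lower (negative) loss.
collapsed = [(1.0, 0.0)] * 3
spread = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0)]
print(uniformity_loss(spread) < uniformity_loss(collapsed))  # True
```

Minimizing the uniformity term alone would scatter representations arbitrarily; combining it with the alignment term keeps semantically matched views close while still spreading the overall distribution, which is the intuition behind using both on the unlabeled target data.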
Low Difficulty Summary (original content by GrooveSquid.com)
Universal Domain Adaptation tries to make machines learn in one place and apply that knowledge in another, without needing labels in both places. The problem is that the two places might not contain similar things, which makes learning hard. Some methods try to match what the two places have in common, but this doesn't work well when many things from the first place are missing in the second. This paper figures out why that approach breaks down and proposes a new way that combines two techniques so machines can learn in one place and apply it better in another.

Keywords

» Artificial intelligence  » Alignment  » Domain adaptation  » Self supervised  » Unsupervised