

Online Feature Updates Improve Online (Generalized) Label Shift Adaptation

by Ruihan Wu, Siddhartha Datta, Yi Su, Dheeraj Baby, Yu-Xiang Wang, Kilian Q. Weinberger

First submitted to arXiv on: 5 Feb 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
The paper addresses label shift in an online setting with missing labels, where the data distribution changes over time and timely labels are hard to obtain. Rather than only adjusting the final layer of a pre-trained classifier, the authors explore improving the feature representation itself using unlabeled data at test time. They propose Online Label Shift adaptation with Online Feature Updates (OLS-OFU), which uses self-supervised learning to refine the feature extractor and, through it, the prediction model. Theoretically, OLS-OFU retains online regret convergence comparable to existing methods, and empirically it achieves substantial improvements over them. A rough code sketch of this online loop is given after the summaries.

Low Difficulty Summary (original content by GrooveSquid.com)
The paper tackles a setting where data changes over time but labels are hard to get. Instead of just updating the final layer of a pre-trained model, the researchers improve how features are extracted from the data at test time. Their new method uses self-supervised learning to make these features better and thereby improve predictions, helping the model keep up with changes in the data distribution.
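To make the idea above more concrete, here is a minimal, illustrative sketch (not the authors' released code) of such an online loop in PyTorch: at each step, the unlabeled test batch is used first for a self-supervised update of the feature extractor and then for a label-shift correction of the classifier's predicted label distribution. The particular self-supervised loss, the hard-label marginal estimator, and the exponential smoothing below are simplifying assumptions; OLS-OFU itself plugs in standard self-supervised objectives and online label shift estimators with regret guarantees.

```python
# Illustrative sketch only: online feature updates + label-shift correction.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self, in_dim=32, feat_dim=16, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.head = nn.Linear(feat_dim, num_classes)  # stands in for the fixed, pre-trained final layer

    def forward(self, x):
        return self.head(self.features(x))

def ssl_loss(features, x):
    """Placeholder self-supervised objective (a simple decorrelation-style
    proxy); the actual choice of SSL loss is an assumption of this sketch."""
    z = features(x)
    return F.mse_loss(z @ z.t() / z.shape[1], torch.eye(x.shape[0]))

def reweight_probs(probs, est_label_marginal, train_label_marginal):
    """Label-shift correction: rescale predicted class probabilities by the
    ratio of estimated test to training label marginals, then renormalise."""
    w = est_label_marginal / train_label_marginal.clamp_min(1e-8)
    adjusted = probs * w
    return adjusted / adjusted.sum(dim=1, keepdim=True)

torch.manual_seed(0)
model = Model()
opt = torch.optim.SGD(model.features.parameters(), lr=1e-3)  # only the feature extractor is updated online
train_label_marginal = torch.tensor([1 / 3, 1 / 3, 1 / 3])
est_label_marginal = train_label_marginal.clone()

for t in range(5):                    # online stream of unlabeled test batches
    x_t = torch.randn(8, 32)          # unlabeled batch arriving at time t

    # (1) Online feature update: refine the feature extractor with the SSL loss.
    opt.zero_grad()
    ssl_loss(model.features, x_t).backward()
    opt.step()

    # (2) Online label-shift step: estimate the current label marginal from the
    # model's own hard predictions and smooth it over time; real OLS methods
    # use more careful estimators with regret guarantees.
    with torch.no_grad():
        probs = F.softmax(model(x_t), dim=1)
        batch_marginal = F.one_hot(probs.argmax(dim=1), 3).float().mean(dim=0)
        est_label_marginal = 0.9 * est_label_marginal + 0.1 * batch_marginal
        preds = reweight_probs(probs, est_label_marginal, train_label_marginal).argmax(dim=1)
        # `preds` are the shift-corrected predictions for the batch at time t.
```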

Keywords

  • Artificial intelligence
  • Feature extraction
  • Self-supervised