


Pseudo Labelling for Enhanced Masked Autoencoders

by Srinivasa Rao Nandam, Sara Atito, Zhenhua Feng, Josef Kittler, Muhammad Awais

First submitted to arXiv on: 25 Jun 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The proposed approach enhances Masked Autoencoders (MAE) by integrating pseudo labelling for both class and data tokens. This strategy uses cluster assignments as pseudo labels to promote instance-level discrimination within the network. The targets for pseudo labelling and reconstruction are generated by a teacher network, which is decoupled into two distinct models: one serves as a labelling teacher and the other as a reconstruction teacher. This separation empirically improves performance on ImageNet-1K and downstream tasks such as classification, semantic segmentation, and detection. A rough code sketch of this two-teacher setup follows the summaries below.
Low Difficulty Summary (written by GrooveSquid.com, original content)
MIM-based models like SdAE, CAE, GreenMIM, and MixAE have tried to improve MAE performance by changing prediction targets or loss functions, or by adding new components. This paper suggests a new way to make MAE better by using pseudo labels for both class and data tokens. It also changes how the network reconstructs pixels into tokens, which helps it learn more about local context. The teacher network makes targets for pseudo labelling and reconstruction, and is split into two parts: one helps with labelling and the other with reconstruction. This works better than having just one teacher, and doesn’t slow down the model too much.
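To make the two-teacher idea above more concrete, here is a minimal, hypothetical PyTorch sketch of how pseudo labelling on the class and data tokens can be combined with a masked reconstruction objective. All names and settings (ToyEncoder, proto_head, recon_head, the 0.75 mask ratio, the temperature tau) are illustrative assumptions, not details taken from the paper.

# Hypothetical sketch: pseudo labelling on class/data tokens with decoupled teachers.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyEncoder(nn.Module):
    """Stand-in for a ViT backbone: returns a class token and patch (data) tokens."""
    def __init__(self, patch_dim=768, dim=192):
        super().__init__()
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.patch_embed = nn.Linear(patch_dim, dim)   # flattened 16x16x3 patches
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, patches):                        # patches: (B, N, patch_dim)
        x = self.patch_embed(patches)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        x = self.blocks(torch.cat([cls, x], dim=1))
        return x[:, 0], x[:, 1:]                       # class token, data tokens

dim, num_prototypes = 192, 1024
student = ToyEncoder(dim=dim)
proto_head = nn.Linear(dim, num_prototypes)            # cluster-assignment logits
recon_head = nn.Linear(dim, dim)                       # predicts reconstruction targets

# Decoupled teachers: one supplies pseudo labels, the other reconstruction targets.
# In practice these could be EMA copies of the student; frozen copies stand in here.
label_teacher, label_proto = copy.deepcopy(student), copy.deepcopy(proto_head)
recon_teacher = copy.deepcopy(student)
for module in (label_teacher, label_proto, recon_teacher):
    for p in module.parameters():
        p.requires_grad_(False)

def training_step(patches, mask_ratio=0.75, tau=0.1):
    B, N, _ = patches.shape
    mask = torch.rand(B, N) < mask_ratio               # True = masked patch
    visible = patches.masked_fill(mask.unsqueeze(-1), 0.0)

    s_cls, s_tok = student(visible)                    # student sees the masked view
    with torch.no_grad():
        t_cls, t_tok = label_teacher(patches)          # labelling teacher sees the full view
        cls_target = F.softmax(label_proto(t_cls) / tau, dim=-1)
        tok_target = F.softmax(label_proto(t_tok) / tau, dim=-1)
        _, r_tok = recon_teacher(patches)              # reconstruction targets

    # Pseudo-label (cluster assignment) losses on the class token and the data tokens.
    cls_loss = (-cls_target * F.log_softmax(proto_head(s_cls), dim=-1)).sum(-1).mean()
    tok_loss = (-tok_target * F.log_softmax(proto_head(s_tok), dim=-1)).sum(-1).mean()
    # Reconstruction loss on masked positions against the reconstruction teacher's tokens.
    rec_loss = F.mse_loss(recon_head(s_tok)[mask], r_tok[mask])
    return cls_loss + tok_loss + rec_loss

loss = training_step(torch.randn(4, 196, 768))         # toy batch: 4 images, 14x14 patches
loss.backward()

The key design point this sketch tries to capture is the decoupling: the labelling teacher drives instance-level discrimination through soft cluster assignments, while a separate reconstruction teacher provides the targets for the masked tokens, so the two objectives do not share a single set of teacher weights.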

Keywords

» Artificial intelligence  » Classification  » MAE  » Semantic segmentation