
Summary of MVEB: Self-Supervised Learning with Multi-View Entropy Bottleneck, by Liangjian Wen et al.


MVEB: Self-Supervised Learning with Multi-View Entropy Bottleneck

by Liangjian Wen, Xiasi Wang, Jianzhuang Liu, Zenglin Xu

First submitted to arXiv on: 28 Mar 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The proposed multi-view entropy bottleneck (MVEB) objective learns a minimal sufficient representation for self-supervised learning by maximizing both the agreement between two views of an image and the differential entropy of the embedding distribution. Discarding superfluous, view-specific information simplifies the representation and helps it generalize effectively to downstream tasks. MVEB outperforms previous approaches, reaching 76.9% top-1 accuracy on ImageNet with a vanilla ResNet-50 backbone under linear evaluation.
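
To make the objective more concrete, here is a minimal sketch, not the authors' implementation, of a loss that combines a view-agreement term with a differential-entropy surrogate. The PyTorch setup, the nearest-neighbor entropy proxy, and the `entropy_weight` parameter are assumptions for illustration; the paper uses its own entropy estimator.

```python
# Illustrative sketch (assumption, not the paper's code): agreement between two
# views plus a simple surrogate for the entropy of the embedding distribution.
import torch
import torch.nn.functional as F


def mveb_style_loss(z1: torch.Tensor, z2: torch.Tensor, entropy_weight: float = 1.0) -> torch.Tensor:
    """z1, z2: (batch, dim) embeddings of two augmented views of the same images."""
    # Work on the unit hypersphere, as is common in view-agreement objectives.
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)

    # Agreement term: mean cosine similarity between paired views (to be maximized).
    agreement = (z1 * z2).sum(dim=1).mean()

    # Entropy surrogate (assumption): mean log distance to the nearest other
    # embedding in the batch, a crude nearest-neighbor proxy for the
    # differential entropy of the embedding distribution.
    z = torch.cat([z1, z2], dim=0)                          # (2B, dim)
    dist = torch.cdist(z, z)                                # pairwise Euclidean distances
    eye = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    dist = dist.masked_fill(eye, float("inf"))              # exclude self-distances
    nn_dist = dist.min(dim=1).values.clamp_min(1e-8)
    entropy_proxy = nn_dist.log().mean()

    # Minimizing this loss maximizes both agreement and the entropy proxy.
    return -agreement - entropy_weight * entropy_proxy


# Usage sketch with random embeddings standing in for an encoder's output
# (e.g., a ResNet-50 backbone followed by a projection head).
if __name__ == "__main__":
    z1, z2 = torch.randn(256, 128), torch.randn(256, 128)
    print(mveb_style_loss(z1, z2).item())
```

The agreement term keeps only information shared across views, while the entropy term spreads embeddings over the sphere so the representation does not collapse.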
Low Difficulty Summary (written by GrooveSquid.com, original content)
The paper proposes a new way to learn representations that can be used for many different tasks. It does this by comparing two views of an image and keeping the most important information shared between them. This helps eliminate unimportant details and improves how well the learned representation works on other tasks. The authors test their method on the ImageNet dataset and find that it performs better than previous methods.

Keywords

» Artificial intelligence  » Embedding  » Generalization  » Resnet  » Self supervised