
Summary of Efficient Distribution Matching of Representations via Noise-Injected Deep InfoMax, by Ivan Butakov et al.


Efficient Distribution Matching of Representations via Noise-Injected Deep InfoMax

by Ivan Butakov, Alexander Semenenko, Alexander Tolmachev, Andrey Gladkov, Marina Munkhoeva, Alexey Frolov

First submitted to arXiv on: 9 Oct 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Information Theory (cs.IT); Machine Learning (stat.ML)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
Deep InfoMax (DIM), a method based on maximizing the mutual information between the input and output of a deep neural network encoder, is enhanced to learn self-supervised representations that conform to a specified distribution, a task known as distribution matching (DM). The enhancement injects independent noise into the normalized outputs of the encoder while keeping the same InfoMax training objective. This modification makes it possible to learn representations that are uniformly distributed, normally distributed, or follow other absolutely continuous distributions. The approach is evaluated on various downstream tasks, with results indicating only a moderate trade-off between downstream performance and the quality of distribution matching.
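For intuition, here is a minimal PyTorch sketch of the noise-injection idea. It is not the authors' implementation: the encoder architecture, the sigmoid normalization, the uniform noise, and the use of an InfoNCE estimator as the InfoMax objective are all illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyEncoder(nn.Module):
    """Toy encoder whose normalized outputs receive independent noise."""
    def __init__(self, in_dim=784, rep_dim=16, noise_scale=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, rep_dim),
        )
        self.noise_scale = noise_scale

    def forward(self, x):
        z = torch.sigmoid(self.net(x))                       # normalize outputs into (0, 1)
        eps = self.noise_scale * (torch.rand_like(z) - 0.5)  # independent uniform noise
        return z + eps                                       # noise-injected representation

def infonce_loss(z1, z2, temperature=0.1):
    """InfoNCE lower bound on the mutual information between two views."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature       # pairwise similarities
    labels = torch.arange(z1.size(0))        # positive pairs sit on the diagonal
    return F.cross_entropy(logits, labels)

# Toy training step: the two perturbed copies stand in for real augmentations.
encoder = NoisyEncoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
x = torch.randn(32, 784)
view1 = x + 0.05 * torch.randn_like(x)
view2 = x + 0.05 * torch.randn_like(x)
opt.zero_grad()
loss = infonce_loss(encoder(view1), encoder(view2))
loss.backward()
opt.step()

Roughly, the independent noise makes tightly clustered outputs indistinguishable, so maximizing mutual information pushes the normalized outputs to spread across their range, which is what allows the output distribution to be shaped toward a chosen target.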
Low Difficulty Summary (original content by GrooveSquid.com)
Deep InfoMax (DIM) has been improved so it can learn self-supervised representations that match a chosen distribution. DIM trains an encoder by maximizing the mutual information between its input and output. To make the outputs match a distribution, noise is added to them while the training goal stays the same. This lets the encoder produce representations with different kinds of distributions, and the new approach works well across different tasks.

Keywords

» Artificial intelligence  » Encoder  » Neural network  » Self-supervised