
Smooth InfoMax – Towards Easier Post-Hoc Interpretability

by Fabian Denoodt, Bart de Boer, José Oramas

First submitted to arXiv on: 23 Aug 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here
Medium Difficulty Summary (written by GrooveSquid.com, original content)
This novel method for self-supervised representation learning, Smooth InfoMax (SIM), incorporates an interpretability constraint into the representations learned at various depths of a neural network. The architecture is split into probabilistic modules, each optimized with the InfoNCE bound. Inspired by VAEs, these modules produce samples from Gaussian distributions that are further constrained to lie close to the standard normal distribution. The result is a smooth, predictable latent space that can be traversed for easier post-hoc analysis. SIM's performance is evaluated on sequential speech data, where it performs competitively with its less interpretable counterpart, Greedy InfoMax (GIM). Furthermore, insights into SIM's internal representations show that the contained information is less entangled across the representation and more concentrated in a smaller subset of dimensions, highlighting improved interpretability.
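The summary above describes each module's objective as an InfoNCE contrastive term plus a VAE-style pull of the Gaussian posteriors toward the standard normal. A minimal NumPy sketch of such a per-module loss is shown below; the function names, the `beta` weight, and the reparameterised sampling are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    # KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over
    # latent dims, averaged over the batch.
    return np.mean(0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=1))

def info_nce(anchors, positives, temperature=0.1):
    # InfoNCE: row i of `positives` is the positive for row i of
    # `anchors`; all other rows serve as negatives.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                      # (B, B) similarities
    log_probs = logits - np.log(np.sum(np.exp(logits), axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))                 # match each row to itself

def module_loss(context, mu, log_var, beta=0.01, rng=None):
    # Reparameterised sample z ~ N(mu, diag(exp(log_var))), as in VAEs,
    # contrasted against the module's context representations.
    rng = np.random.default_rng(0) if rng is None else rng
    z = mu + np.exp(0.5 * log_var) * rng.standard_normal(mu.shape)
    return info_nce(context, z) + beta * kl_to_standard_normal(mu, log_var)
```

Each module would be trained greedily on its own `module_loss`, so gradients need not flow through the whole network; the KL term is what keeps the per-module latent space smooth and centred around the origin.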
Low Difficulty Summary (written by GrooveSquid.com, original content)
Smooth InfoMax (SIM) is a new way to learn good representations from data without labels. It makes sure the learned representations are easy to understand by adding a special constraint. The method has several parts, each of which is optimized separately using a special bound. This helps create a smooth and predictable space that’s easier to explore. We tested SIM on speech data and found it works well compared to a similar but less interpretable approach. By looking at what SIM learns, we can see that the information is more organized and easier to understand.

Keywords

» Artificial intelligence  » Latent space  » Neural network  » Representation learning  » Self supervised