
Learning Human-Aligned Representations with Contrastive Learning and Generative Similarity

by Raja Marjieh, Sreejan Kumar, Declan Campbell, Liyi Zhang, Gianluca Bencomo, Jake Snell, Thomas L. Griffiths

First submitted to arXiv on: 29 May 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Neurons and Cognition (q-bio.NC)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper's original abstract.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper proposes a novel approach to inducing effective representations in machine learning models, supporting few-shot learning and robustness. The method leverages a Bayesian notion of generative similarity to capture human cognitive representations. By incorporating generative similarity into a contrastive learning objective, the method learns embeddings that express human-like representations. The utility of this approach is demonstrated through experiments on shape regularity, abstract Euclidean geometric concepts, and semantic hierarchies for natural images.

Low Difficulty Summary (written by GrooveSquid.com, original content)
Machine learning models can learn from few examples and understand complex data better if they have good representations. Researchers are working on finding ways to train these models effectively. One problem is that it's hard to find training data that shows how humans think about certain things, which makes it difficult to develop Bayesian models of human thinking. A new method uses a concept called "generative similarity" to help solve this problem. Under this idea, two pieces of data are similar if they were likely to have come from the same source. This can be applied to complex situations and to programs that generate data. The approach is tested on different tasks, such as understanding shapes and natural images.

Keywords

» Artificial intelligence  » Few shot  » Machine learning