


Dissecting Representation Misalignment in Contrastive Learning via Influence Function

by Lijie Hu, Chenyang Ren, Huanyi Xie, Khouloud Saadi, Shu Yang, Zhen Tan, Jingfeng Zhang, Di Wang

First submitted to arXiv on: 18 Nov 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
Contrastive learning is a crucial technique in large-scale multimodal models, but it relies on diverse web-scale datasets in which text-image pairs are often misaligned or mislabeled, leading to robustness issues and hallucinations. To address this, we propose the Extended Influence Function for Contrastive Loss (ECIF), an influence function designed specifically for contrastive learning. ECIF accounts for both positive and negative samples and provides a closed-form approximation of each sample's influence, eliminating the need for retraining. Our approach supports data evaluation, misalignment detection, and misprediction trace-back with improved transparency and interpretability. Building on ECIF, we develop a series of algorithms that improve the performance of CLIP-style embedding models.
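
To make the influence-function idea concrete, here is a minimal sketch in PyTorch. It is not the paper's ECIF algorithm: the toy two-tower encoders, the InfoNCE-style loss, the LiSSA-style inverse-Hessian-vector-product routine, and the naive leave-one-out gradient for a pair's contribution are all illustrative assumptions. The sketch only shows the classical influence-score recipe (score_i = grad L(z_i)^T H^{-1} grad L(z_test)) applied to a contrastive batch, where each pair acts both as a positive and as a negative for the other pairs.

```python
# Generic influence-function sketch for a contrastive (CLIP-style) model.
# NOT the paper's ECIF algorithm; model, loss, and solver are toy assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy two-tower encoders (stand-ins for CLIP's image/text encoders).
image_enc = nn.Linear(8, 4)
text_enc = nn.Linear(8, 4)
params = list(image_enc.parameters()) + list(text_enc.parameters())

def contrastive_loss(img, txt, temperature=0.5):
    """Symmetric InfoNCE loss over a batch of (image, text) pairs."""
    zi = F.normalize(image_enc(img), dim=-1)
    zt = F.normalize(text_enc(txt), dim=-1)
    logits = zi @ zt.t() / temperature
    labels = torch.arange(img.size(0))
    return 0.5 * (F.cross_entropy(logits, labels)
                  + F.cross_entropy(logits.t(), labels))

def flat_grad(loss, create_graph=False):
    """Gradient of `loss` w.r.t. all parameters, flattened to one vector."""
    grads = torch.autograd.grad(loss, params, create_graph=create_graph)
    return torch.cat([g.reshape(-1) for g in grads])

def hvp(loss_fn, vec):
    """Hessian-vector product of the training loss with `vec`."""
    g = flat_grad(loss_fn(), create_graph=True)
    return flat_grad(g @ vec)

def inverse_hvp(loss_fn, v, damping=0.01, scale=25.0, steps=100):
    """LiSSA-style iterative estimate of H^{-1} v (a standard trick; the
    paper instead derives closed-form expressions). Hyperparameters here
    are ad hoc for this toy problem."""
    est = v.clone()
    for _ in range(steps):
        est = v + (1 - damping) * est - hvp(loss_fn, est) / scale
    return est / scale

# Toy data: 16 training pairs, 4 test pairs (random stand-ins).
train_img, train_txt = torch.randn(16, 8), torch.randn(16, 8)
test_img, test_txt = torch.randn(4, 8), torch.randn(4, 8)
train_loss = lambda: contrastive_loss(train_img, train_txt)

# s_test = H^{-1} * grad(test loss), shared across all training pairs.
s_test = inverse_hvp(train_loss, flat_grad(contrastive_loss(test_img, test_txt)))

g_full = flat_grad(train_loss())

def pair_grad(i):
    """Naive leave-one-out gradient contribution of pair i. Because pair i
    also serves as a negative for every other pair, its contribution is the
    full-batch gradient minus the gradient without it -- the coupling that
    ECIF handles in closed form instead of by recomputation."""
    keep = [j for j in range(train_img.size(0)) if j != i]
    g_without = flat_grad(contrastive_loss(train_img[keep], train_txt[keep]))
    return g_full - g_without

# Influence of each pair on the test loss: negative scores mark pairs whose
# removal is predicted to *lower* the test loss -- misalignment candidates.
scores = torch.tensor([(pair_grad(i) @ s_test).item()
                       for i in range(train_img.size(0))])
print(scores.argsort()[:3])  # three most suspicious training pairs
```

The leave-one-out loop above costs one extra gradient per training pair, which is exactly the kind of recomputation ECIF is designed to avoid: its contribution is a closed-form approximation of this scoring for contrastive models, with no per-sample retraining.
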
Low Difficulty Summary (original content by GrooveSquid.com)
Contrastive learning is an important technique used in computer vision and natural language processing. However, it relies on large datasets, which can contain errors or mislabeled information, and those errors can hurt the model's performance. To solve this issue, we developed a new method called ECIF (Extended Influence Function for Contrastive Loss). It measures how individual pieces of data affect the model's performance without retraining the model. Our approach helps detect and correct errors in the data, making the model more accurate and reliable.

Keywords

» Artificial intelligence  » Contrastive loss  » Embedding  » Natural language processing