Summary of "An Embedding is Worth a Thousand Noisy Labels" by Francesco Di Salvo, Sebastian Doerrich, Ines Rieger, and Christian Ledig
An Embedding is Worth a Thousand Noisy Labels
by Francesco Di Salvo, Sebastian Doerrich, Ines Rieger, Christian Ledig
First submitted to arXiv on: 26 Aug 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Machine Learning (cs.LG); Image and Video Processing (eess.IV)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper and are written at different levels of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | The paper’s original abstract, available on arXiv |
Medium | GrooveSquid.com (original content) | This paper proposes a novel approach to mitigating the impact of low-quality data annotations on deep neural networks. The authors argue that existing strategies are limited by their computational complexity and their dependence on the specific application. They introduce WANN, a Weighted Adaptive Nearest Neighbor approach that operates on self-supervised feature representations obtained from foundation models. A reliability score guides the weighted voting scheme, which outperforms reference methods across diverse datasets with different noise types and severities. WANN also generalizes better on imbalanced data than both adaptive and fixed k-nearest-neighbor baselines. In addition, the approach improves supervised dimensionality reduction under noisy labels, yielding better classification performance with significantly smaller image embeddings (a simplified code sketch of the weighted voting idea follows the table). |
Low | GrooveSquid.com (original content) | This paper is about a new way to make deep learning models work well even when the training data contains mistakes. Right now we lack a good solution to this problem because existing methods are too complicated or only work for certain types of data. The authors introduce an approach called WANN that uses features learned without labels (self-supervision) and a special reliability score to decide which labels to trust. This method works better than other approaches on many datasets with different kinds of noise. It also helps when some classes have far more examples than others, which matters in many real applications. |
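
To make the weighted voting described in the medium-difficulty summary more concrete, here is a minimal sketch of a reliability-weighted nearest-neighbor classifier on frozen embeddings. This is not the authors' WANN implementation: the particular reliability score (neighbor label agreement), the fixed choice of k (WANN adapts k per sample), and all function names below are illustrative assumptions.

```python
# Minimal sketch: reliability-weighted k-NN voting on precomputed embeddings.
# NOT the authors' WANN code; reliability definition, fixed k, and names are
# illustrative assumptions for exposition only.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def reliability_scores(train_emb, train_labels, k=10):
    """Score each training sample by how often its k nearest neighbours
    (excluding itself) agree with its own, possibly noisy, label."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(train_emb)
    _, idx = nn.kneighbors(train_emb)
    neighbour_labels = train_labels[idx[:, 1:]]  # drop the self-match
    return (neighbour_labels == train_labels[:, None]).mean(axis=1)

def weighted_knn_predict(query_emb, train_emb, train_labels, scores, k=10):
    """Classify each query by a reliability-weighted vote over its k neighbours."""
    nn = NearestNeighbors(n_neighbors=k).fit(train_emb)
    _, idx = nn.kneighbors(query_emb)
    preds = []
    for neigh in idx:
        votes = np.bincount(train_labels[neigh],
                            weights=scores[neigh],
                            minlength=int(train_labels.max()) + 1)
        preds.append(int(votes.argmax()))
    return np.array(preds)
```

In this sketch, samples whose labels disagree with their neighborhood receive low reliability and therefore contribute little to the vote, which is the intuition behind using a reliability score to dampen the influence of noisy labels.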
Keywords
» Artificial intelligence » Classification » Deep learning » Dimensionality reduction » Generalization » Nearest neighbor » Self supervised » Supervised