
Summary of Cross Entropy versus Label Smoothing: A Neural Collapse Perspective, by Li Guo et al.


Cross Entropy versus Label Smoothing: A Neural Collapse Perspective

by Li Guo, Keith Ross, Zifan Zhao, George Andriopoulos, Shuyang Ling, Yufeng Xu, Zixuan Dong

First submitted to arxiv on: 6 Feb 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each of the summaries below covers the same AI paper and is written at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty summary is the paper’s original abstract, which can be read on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper investigates the relationship between Label Smoothing Loss (LSL) and Neural Collapse (NC), a theoretical framework that characterizes the behavior of deep neural networks during the terminal phase of training. The study reveals that LSL-trained models converge faster to NC solutions, reaching a stronger level of NC1 (within-class variability collapse) and, at a comparable level of NC1, exhibiting intensified NC2 (convergence of class means toward a simplex equiangular tight frame); an illustrative sketch of how NC1 and NC2 can be measured appears after the summaries. These findings offer insight into the performance benefits and enhanced model calibration observed under LSL. Additionally, the paper derives closed-form solutions for the global minimizers of both loss functions under the unconstrained feature model, showing that the LSL solution has a smaller condition number and therefore converges faster in theory. This research demonstrates how NC can be used to deepen our understanding of deep neural networks, providing nuanced insight into the differences between label smoothing and cross-entropy losses.
Low Difficulty Summary (written by GrooveSquid.com, original content)
This study looks at how Label Smoothing Loss affects deep learning. Neural Collapse is a way to understand what happens inside models after they have been trained for a long time. The research finds that models trained with Label Smoothing Loss learn faster and perform better than those trained with the standard cross-entropy loss. This helps explain why some models perform better than others. The paper also shows that these models give more reliable confidence estimates, which is important for real-world applications.
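
To make the NC1 and NC2 metrics mentioned above more concrete, here is a minimal, illustrative sketch (not code from the paper) of how they are commonly measured on a model's penultimate-layer features, following standard formulations from the neural collapse literature. The array names `features` and `labels` and the function `nc_metrics` are hypothetical placeholders.

```python
# Illustrative sketch: measuring NC1 and NC2 from penultimate-layer features.
# Assumes `features` is an (N, d) array, `labels` an (N,) array of class
# indices in {0, ..., K-1}, and K >= 2.
import numpy as np

def nc_metrics(features, labels):
    classes = np.unique(labels)
    K = len(classes)
    global_mean = features.mean(axis=0)

    # Class means and within-/between-class covariance matrices.
    means = np.stack([features[labels == c].mean(axis=0) for c in classes])
    centered_means = means - global_mean
    sigma_b = centered_means.T @ centered_means / K
    sigma_w = np.zeros_like(sigma_b)
    for c, mu in zip(classes, means):
        diffs = features[labels == c] - mu
        sigma_w += diffs.T @ diffs / len(features)

    # NC1: within-class variability relative to between-class variability;
    # it shrinks toward 0 as within-class collapse intensifies.
    nc1 = np.trace(sigma_w @ np.linalg.pinv(sigma_b)) / K

    # NC2: deviation of the (centered, normalized) class means from a simplex
    # equiangular tight frame, whose pairwise cosines all equal -1/(K-1);
    # smaller values mean the means are closer to the ETF configuration.
    normed = centered_means / np.linalg.norm(centered_means, axis=1, keepdims=True)
    cosines = normed @ normed.T
    target = -np.ones((K, K)) / (K - 1) + (1 + 1 / (K - 1)) * np.eye(K)
    nc2 = np.linalg.norm(cosines - target)

    return nc1, nc2
```

Under this sketch, tracking the two values across training checkpoints is one way to compare how quickly label-smoothing-trained and cross-entropy-trained models approach the collapsed configuration: stronger NC1 corresponds to a smaller first value, and intensified NC2 corresponds to a smaller second value.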

Keywords

  • Artificial intelligence
  • Cross entropy
  • Deep learning
  • Neural network