Deep Neural Network Models Trained With A Fixed Random Classifier Transfer Better Across Domains

by Hafiz Tiomoko Ali, Umberto Michieli, Ji Joong Moon, Daehyun Kim, Mete Ozay

First submitted to arXiv on: 28 Feb 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Computer Vision and Pattern Recognition (cs.CV); Neural and Evolutionary Computing (cs.NE)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper, each written at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper’s original abstract, available on the paper’s arXiv page.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper explores the neural collapse phenomenon, in which Deep Neural Networks (DNNs) converge to an Equiangular Tight Frame (ETF) geometry during training. Inspired by this property, the authors fix the last-layer classifier weights to a random ETF and train DNN models with this fixed classifier, achieving improved transfer performance on various fine-grained image classification datasets. The approach outperforms both baseline methods and explicit covariance-whitening methods, demonstrating a powerful mechanism for improving domain transfer learning. (A code sketch of the core idea follows the summaries below.)

Low Difficulty Summary (written by GrooveSquid.com, original content)
Scientists discovered that deep neural networks settle into a special pattern during training, and that this pattern helps them recognize things they haven’t seen before. The researchers took this idea and used it to improve how well these networks work when shown new things. They found that this way of training the networks makes them perform up to 22% better than usual on certain types of pictures. This is important because it could help computers become even better at recognizing objects, people, and more.
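
For readers who want to see the mechanism concretely, here is a minimal PyTorch sketch of training with a fixed ETF classifier, using the standard simplex-ETF construction from the neural collapse literature. The paper does not publish this exact code; `backbone`, `feat_dim`, and `num_classes` are illustrative names, and details such as scaling or initialization may differ from the authors’ implementation.

```python
# Minimal sketch: a network whose last-layer classifier is fixed to a
# random simplex ETF, so only the backbone is trained.
import torch
import torch.nn as nn


def simplex_etf(num_classes: int, feat_dim: int) -> torch.Tensor:
    """Return a (num_classes, feat_dim) simplex ETF weight matrix.

    Class vectors have equal norm and pairwise cosine -1/(num_classes - 1),
    the geometry that neural collapse converges to. Requires
    feat_dim >= num_classes.
    """
    assert feat_dim >= num_classes
    # Random partial orthogonal matrix M with M^T M = I_K.
    m = torch.linalg.qr(torch.randn(feat_dim, num_classes)).Q  # (d, K)
    k = num_classes
    # W = sqrt(K/(K-1)) * M (I_K - (1/K) 1 1^T): centered, rescaled columns.
    etf = (k / (k - 1)) ** 0.5 * (m @ (torch.eye(k) - torch.ones(k, k) / k))
    return etf.t()  # (K, d)


class FixedETFNet(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone
        # Registered as a buffer, not a parameter: the classifier receives
        # no gradient updates and stays at its random ETF initialization.
        self.register_buffer("classifier", simplex_etf(num_classes, feat_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(x)            # (B, feat_dim) features
        return feats @ self.classifier.t()  # (B, num_classes) logits


# Illustrative usage with a hypothetical 512-d backbone and 100 classes:
# model = FixedETFNet(my_backbone, feat_dim=512, num_classes=100)
# optimizer = torch.optim.SGD(model.parameters(), lr=0.1)  # head excluded
# loss = nn.functional.cross_entropy(model(images), labels)
```

Because the classifier is a buffer rather than a parameter, `model.parameters()` returns only the backbone weights, so a standard optimizer never touches the head; gradients instead pull the backbone’s features toward the fixed, maximally separated ETF directions.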

Keywords

* Artificial intelligence
* Image classification
* Transfer learning