
Deep Nets with Subsampling Layers Unwittingly Discard Useful Activations at Test-Time

by Chiao-An Yang, Ziwei Liu, Raymond A. Yeh

First submitted to arXiv on: 1 Oct 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Machine Learning (cs.LG)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
The proposed search and aggregate method leverages activation maps that subsampling layers normally discard, using them at test time to improve a model's predictions. By incorporating these otherwise-discarded maps, models achieve better prediction quality. The method is demonstrated on image classification and semantic segmentation with nine different architectures across multiple datasets, yielding consistent test-time improvements. It also complements existing test-time augmentation techniques, highlighting the potential of discarded activations for boosting model performance (a simplified sketch of this idea follows the summaries below).

Low Difficulty Summary (original content by GrooveSquid.com)
This paper shows that useful information is being thrown away in deep learning models. Instead of getting rid of it, we can use it to make our models better. We came up with a way to find this useful information and add it back into the model at test time. We tested this idea on several different types of tasks and found that it works really well. This is good news for people who want to improve their AI models.
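
To make the idea more concrete, here is a minimal, hypothetical sketch in PyTorch. It is not the authors' search and aggregate procedure; it only illustrates that a 2x2, stride-2 subsampling layer keeps one of four possible spatial offsets of its input, and that predictions computed from the discarded offsets can be combined at test time. The `model` name, input shape, and the plain logit-averaging rule are assumptions made for illustration.

```python
# Hypothetical illustration only: average predictions over the four spatial
# offsets that a 2x2, stride-2 subsampling layer would otherwise discard.
import torch

@torch.no_grad()
def aggregate_over_pool_offsets(model, x):
    """Run `model` on each of the four offsets of a stride-2 subsampling
    grid and average the resulting logits (assumed aggregation rule)."""
    logits = []
    for dy in (0, 1):
        for dx in (0, 1):
            # Shifting the input changes which activations survive the
            # stride-2 subsampling inside `model`.
            shifted = torch.roll(x, shifts=(-dy, -dx), dims=(-2, -1))
            logits.append(model(shifted))
    return torch.stack(logits).mean(dim=0)

# Example usage with a hypothetical classifier:
# model = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()
# x = torch.randn(1, 3, 224, 224)
# averaged_logits = aggregate_over_pool_offsets(model, x)
```

The paper's actual method searches over and aggregates the discarded activations rather than taking the plain average shown here; the sketch only conveys why activations dropped by subsampling can still carry usable signal.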

Keywords

  • Artificial intelligence
  • Deep learning
  • Image classification
  • Semantic segmentation