Summary of DenseNets Reloaded: Paradigm Shift Beyond ResNets and ViTs, by Donghyun Kim et al.


DenseNets Reloaded: Paradigm Shift Beyond ResNets and ViTs

by Donghyun Kim, Byeongho Heo, Dongyoon Han

First submitted to arXiv on: 28 Mar 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Machine Learning (cs.LG); Neural and Evolutionary Computing (cs.NE)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper and are written at different levels of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
Densely Connected Convolutional Networks (DenseNets) have been overlooked in favor of ResNet-style architectures, despite their potential. Our pilot study shows that dense connections through concatenation are strong and can be revitalized to compete with modern architectures by refining suboptimal components. We achieved this through architectural adjustments, block redesigns, and improved training recipes. Our models, built from simple architectural elements, surpassed Swin Transformer, ConvNeXt, and DeiT-III, key architectures in the residual learning lineage. They also achieved near state-of-the-art performance on ImageNet-1K, remaining competitive with recent models on downstream tasks such as ADE20K semantic segmentation and COCO object detection/instance segmentation.

Low Difficulty Summary (original content by GrooveSquid.com)
DenseNets are a type of neural network that can learn complex patterns in images. For a long time, they were not considered as good as other networks called ResNet-style architectures. But we think DenseNets have been underrated and can do better with some changes. We made these changes by adjusting how the network is built, changing how it processes information, and improving how it’s trained. Our new models are simple but work well. They even beat some of the best networks on ImageNet-1K, a large benchmark for image recognition.
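The key distinction the paper builds on is how features from earlier layers are combined: ResNets add them, while DenseNets concatenate them. A minimal NumPy sketch (not the paper's code; the toy `layer` function and the growth rate of 4 are illustrative assumptions) shows the structural consequence, namely that concatenation grows the feature width at every layer:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, out_dim):
    # Toy stand-in for a conv block: a random linear map plus nonlinearity.
    # Purely illustrative; real DenseNet blocks use BN + ReLU + convolution.
    w = rng.standard_normal((x.shape[-1], out_dim))
    return np.tanh(x @ w)

x = rng.standard_normal((1, 8))  # a single 8-dimensional feature vector

# ResNet-style combination: element-wise addition.
# Feature width is unchanged: 8 -> 8.
res = x + layer(x, x.shape[-1])

# DenseNet-style combination: concatenation along the feature axis.
# Each layer appends `growth_rate` new features: 8 -> 12 -> 16.
growth_rate = 4  # assumed value for illustration
dense = x
for _ in range(2):
    dense = np.concatenate([dense, layer(dense, growth_rate)], axis=-1)

print(res.shape)    # (1, 8)
print(dense.shape)  # (1, 16)
```

This is why every later layer in a dense block "sees" the raw outputs of all earlier layers, whereas in a residual block earlier features are merged into a single fixed-width sum.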

Keywords

* Artificial intelligence  * Instance segmentation  * Neural network  * Object detection  * ResNet  * Semantic segmentation  * Transformer