
Summary of When To Grow? A Fitting Risk-Aware Policy for Layer Growing in Deep Neural Networks, by Haihang Wu et al.


When To Grow? A Fitting Risk-Aware Policy for Layer Growing in Deep Neural Networks

by Haihang Wu, Wei Wang, Tamasha Malepathirana, Damith Senanayake, Denny Oetomo, Saman Halgamuge

First submitted to arXiv on: 6 Jan 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
The research paper investigates neural growth, a technique that accelerates deep neural network training by progressively adding layers. The study reveals that growth timing is crucial: neural growth inherently exhibits a regularization effect that influences model accuracy, but this same effect can introduce underfitting risks if left unaddressed. To mitigate these risks, the authors propose an under/overfitting risk-aware growth timing policy that adjusts when layers are added based on the model's potential for underfitting or overfitting. Evaluated on CIFAR-10/100 and ImageNet, the proposed method improves accuracy by up to 1.3% in models prone to underfitting while maintaining comparable accuracy in models prone to overfitting, relative to existing methods.
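To make the idea concrete, here is a minimal, hypothetical sketch of a fitting-risk-aware growth timing policy. The function names, thresholds, and the use of the train/validation loss gap as a risk signal are illustrative assumptions for this summary, not the paper's actual algorithm: the sketch simply grows earlier when the model looks underfit (to add capacity before growth's regularization hurts) and later when it looks overfit (to exploit that regularization).

```python
# Illustrative sketch only -- names and thresholds are assumptions,
# not the method from Wu et al.

def fitting_risk(train_loss, val_loss, gap_thresh=0.1, train_thresh=0.5):
    """Classify fitting risk from the generalization gap and training loss."""
    if val_loss - train_loss > gap_thresh:  # validation much worse: overfitting
        return "overfit"
    if train_loss > train_thresh:           # training loss still high: underfitting
        return "underfit"
    return "ok"

def should_grow(epoch, train_loss, val_loss, early_epoch=10, late_epoch=40):
    """Decide whether to grow a new layer at this epoch.

    Underfit-prone models grow early (more capacity, less regularization);
    overfit-prone models grow late (let growth's regularization effect act).
    """
    risk = fitting_risk(train_loss, val_loss)
    if risk == "underfit":
        return epoch >= early_epoch
    if risk == "overfit":
        return epoch >= late_epoch
    return epoch >= (early_epoch + late_epoch) // 2
```

For example, a model with a high training loss and a small gap would trigger growth shortly after `early_epoch`, whereas one with a large train/validation gap would wait until `late_epoch`.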
Low Difficulty Summary (original content by GrooveSquid.com)
Neural growth is a way to make neural networks bigger while speeding up their training. One important part of this process is figuring out when to grow the network. This study shows that growing the network can help keep it from fitting the training data too closely, but it can also cause the model to not fit the data well enough. The authors came up with a new way to decide when to grow the network based on how likely it is to overfit or underfit. They tested this method on image datasets of animals and objects, and found that it worked better than other methods in some cases.

Keywords

* Artificial intelligence  * Neural network  * Overfitting  * Regularization  * Underfitting