
Summary of The Unreasonable Effectiveness Of Early Discarding After One Epoch in Neural Network Hyperparameter Optimization, by Romain Egele et al.


The Unreasonable Effectiveness Of Early Discarding After One Epoch In Neural Network Hyperparameter Optimization

by Romain Egele, Felix Mohr, Tom Viering, Prasanna Balaprakash

First submitted to arXiv on: 5 Apr 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper revisits a long-standing trade-off in hyperparameter optimization (HPO) for deep learning: discarding unpromising candidate configurations early saves compute, but risks losing predictive performance. The study compares popular early discarding methods, such as successive halving and learning curve extrapolation, against a simple baseline that trains every candidate for a constant number of epochs before discarding, dubbed i-Epoch. The authors find that the more sophisticated techniques offer minimal added value over this baseline, and that the best choice of i depends mainly on the available compute budget. They therefore suggest reevaluating early discarding techniques by comparing their Pareto fronts of consumed training epochs versus predictive performance with that of i-Epoch.

Low Difficulty Summary (written by GrooveSquid.com, original content)
Deep learning models are really good at tasks like recognizing images or understanding speech, but to make them work well we first need to adjust many settings. This is called hyperparameter optimization (HPO). It's a bit like trying different strengths of medicine until you find the one that works best. The problem is that HPO can take a long time, because every combination of settings has to be tested separately. To speed things up, people use techniques that stop training settings that are clearly not going to work well. This paper finds that the clever versions of these techniques don't really help much: simply training each candidate for a fixed, small number of passes over the data (called epochs), chosen based on how much computer power is available, works just as well.

Keywords

  • Artificial intelligence
  • Deep learning
  • Hyperparameter
  • Optimization