Bayesian Optimisation with Unknown Hyperparameters: Regret Bounds Logarithmically Closer to Optimal
by Juliusz Ziomek, Masaki Adachi, Michael A. Osborne
First submitted to arXiv on: 14 Oct 2024
Categories
- Main: Machine Learning (stat.ML)
- Secondary: Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper but are written at different levels of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | The paper’s original abstract; read it on arXiv.
Medium | GrooveSquid.com (original content) | In this paper, the researchers address a limitation of Bayesian Optimization (BO) algorithms for black-box functions: BO requires specifying a length scale hyperparameter, which defines how smooth the considered functions are assumed to be. Most current BO algorithms choose this hyperparameter by maximizing the marginal likelihood of the observed data, risking misspecification if the objective function is less smooth in unexplored regions. The A-GP-UCB algorithm proposed by Berkenkamp et al. (2019) progressively decreases the length scale but lacks a stopping mechanism, leading to over-exploration and slow convergence. The proposed method, Length Scale Balancing (LB), instead aggregates multiple base surrogate models with varying length scales, balancing exploration and exploitation (a minimal code sketch of this multi-length-scale idea follows the table). The authors formally derive a cumulative regret bound for LB and compare it with the regret of an oracle BO algorithm that uses the optimal length scale.
Low | GrooveSquid.com (original content) | This paper is about Bayesian Optimization (BO), a way to find the best solution by trying many different options and seeing which ones work best. But there’s a problem: we have to choose how “smooth” we assume the function to be, and most current methods do this by looking at how well different smoothness levels fit what we’ve seen so far. This can lead to mistakes if the function is less smooth in areas we haven’t explored yet. The authors propose a new method called Length Scale Balancing (LB) that helps choose the right level of smoothness while still finding the best solution.
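To make the multi-length-scale idea concrete, here is a minimal, self-contained Python/NumPy sketch: several Gaussian-process surrogates with different fixed length scales run side by side, and the most optimistic upper-confidence-bound (UCB) proposal among them is queried next. Everything here is an illustrative assumption — the toy objective `f`, the candidate length scales, the `beta` weight, and this particular max-UCB aggregation rule — and it is not the paper’s actual Length Scale Balancing algorithm or its regret-optimal weighting.

```python
import numpy as np

def rbf_kernel(A, B, length_scale):
    """Squared-exponential kernel with a fixed length scale."""
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

def gp_posterior(X_train, y_train, X_query, length_scale, noise=1e-6):
    """Posterior mean/std of a zero-mean GP surrogate at the query points."""
    K = rbf_kernel(X_train, X_train, length_scale) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_train, X_query, length_scale)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = 1.0 - np.sum(v ** 2, axis=0)  # kernel diagonal is 1 for the RBF above
    return mean, np.sqrt(np.clip(var, 1e-12, None))

def f(x):
    """Toy 1-D black-box objective (illustrative only)."""
    return -np.sin(3 * x[..., 0]) - 0.5 * x[..., 0] ** 2 + 0.7 * x[..., 0]

rng = np.random.default_rng(0)
length_scales = [1.0, 0.3, 0.1]            # candidate smoothness levels
X = rng.uniform(-1, 2, size=(3, 1))        # initial design
y = f(X)
grid = np.linspace(-1, 2, 200)[:, None]    # candidate query points
beta = 2.0                                 # UCB exploration weight (assumed)

for t in range(20):
    # One UCB score per surrogate; the most optimistic surrogate proposes next.
    best_val, best_x = -np.inf, None
    for ls in length_scales:
        mean, std = gp_posterior(X, y, grid, ls)
        ucb = mean + beta * std
        i = int(np.argmax(ucb))
        if ucb[i] > best_val:
            best_val, best_x = ucb[i], grid[i]
    X = np.vstack([X, best_x])
    y = np.append(y, f(best_x[None, :]))

print("best observed value:", y.max(), "at x =", X[np.argmax(y), 0])
```

In the paper itself, LB aggregates the base surrogates in a specific way that yields the stated cumulative regret bound; the sketch above only conveys why keeping several length scales in play guards against committing to a single, possibly misspecified smoothness level.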
Keywords
» Artificial intelligence » Hyperparameter » Likelihood » Objective function » Optimization