Summary of Robust Bayesian Optimization Via Localized Online Conformal Prediction, by Dongwon Kim et al.
Robust Bayesian Optimization via Localized Online Conformal Prediction
by Dongwon Kim, Matteo Zecchin, Sangwoo Park, Joonhyuk Kang, Osvaldo Simeone
First submitted to arXiv on: 26 Nov 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Signal Processing (eess.SP)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | A Bayesian optimization algorithm, localized online conformal prediction-based Bayesian optimization (LOCBO), is introduced to address model misspecification in sequential optimization. LOCBO calibrates the Gaussian process surrogate model using predictive sets obtained via localized online conformal prediction, and it denoises the likelihood using input-dependent calibration thresholds (a minimal code sketch of this idea follows the table). The approach comes with theoretical performance guarantees on the iterates that hold for the unobserved objective function. Experiments on synthetic and real-world tasks show that LOCBO outperforms state-of-the-art BO algorithms under model misspecification. |
Low | GrooveSquid.com (original content) | LOCBO is a new way to optimize things by making sure our guesses are good enough. We use special math called Gaussian processes to make predictions, but sometimes these predictions can be wrong. To fix this, we add a check to see how sure we should be about each prediction. This helps us get better results even when our guesses aren’t perfect. We tested LOCBO on lots of different problems and it did really well compared to other ways of optimizing things. |
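
The sketch below is a minimal, NumPy-only illustration of the idea summarized above, not the authors’ LOCBO implementation: a 1-D Bayesian optimization loop in which an online, input-dependent conformal threshold inflates or shrinks the Gaussian process surrogate’s predictive interval before the acquisition step. The toy objective, kernel widths, step size, and UCB-style acquisition are all illustrative assumptions, and the paper’s likelihood-denoising step is not reproduced here.

```python
import numpy as np

def rbf(a, b, ls=0.2):
    """RBF kernel between column vectors a (n,1) and b (m,1)."""
    return np.exp(-0.5 * (a - b.T) ** 2 / ls ** 2)

def gp_posterior(X, y, Xs, noise=1e-3):
    """GP regression posterior mean and standard deviation at test inputs Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = rbf(X, Xs)
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.clip(1.0 - np.sum(v ** 2, axis=0), 1e-9, None)  # diag of rbf(Xs,Xs) is 1
    return mu, np.sqrt(var)

def objective(x):
    """Toy black-box objective to maximize (unknown to the optimizer); assumed here."""
    return np.sin(3.0 * x) + 0.3 * x

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 2.0, 200).reshape(-1, 1)   # candidate inputs
X = rng.uniform(0.0, 2.0, size=(3, 1))             # initial observations
y = objective(X).ravel()

miscoverage = 0.1               # target miscoverage level
theta = np.zeros(len(grid))     # input-dependent log-inflation of the GP interval
eta, loc_ls = 0.05, 0.3         # online step size and localization width (assumed)

for t in range(30):
    mu, sd = gp_posterior(X, y, grid)
    half = sd * np.exp(theta)                 # conformally calibrated half-width
    x_next = grid[[np.argmax(mu + half)]]     # UCB-style acquisition, shape (1,1)
    y_next = objective(x_next).item()

    # Check whether the calibrated interval at x_next covered the new observation.
    i = int(np.argmin(np.abs(grid.ravel() - x_next.item())))
    err = float(abs(y_next - mu[i]) > half[i])  # 1 if miscovered, else 0

    # Localized online conformal update: thresholds near x_next move the most.
    w = rbf(grid, x_next, ls=loc_ls).ravel()
    theta += eta * w * (err - miscoverage)

    X = np.vstack([X, x_next])
    y = np.append(y, y_next)

print("best observed value:", y.max())
```

The localization weights `w` are what makes the calibration input-dependent in this sketch: a miscoverage event at `x_next` widens the predictive interval mostly around that input rather than uniformly over the whole domain.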
Keywords
- Artificial intelligence
- Likelihood
- Optimization