Summary of Sample-Efficient Bayesian Optimization with Transfer Learning for Heterogeneous Search Spaces, by Aryan Deshwal et al.
Sample-Efficient Bayesian Optimization with Transfer Learning for Heterogeneous Search Spaces
by Aryan Deshwal, Sait Cakmak, Yuhou Xia, David Eriksson
First submitted to arXiv on: 9 Sep 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | Bayesian optimization (BO) is a powerful method for efficiently optimizing black-box functions, but when only a few function evaluations are available, it helps to transfer information from historical experiments, and those experiments often used different tunable parameters (search spaces). To address this, we propose two methods for Bayesian optimization over heterogeneous search spaces. The first uses a Gaussian process (GP) model with a conditional kernel to transfer information between different search spaces. The second treats the parameters missing from a given search space as hyperparameters of the GP model, which can be inferred jointly with the other GP hyperparameters or set to fixed values. We demonstrate the effectiveness of these methods on several benchmark problems. |
| Low | GrooveSquid.com (original content) | Bayesian optimization is a way for computers to find the best settings for something without trying every combination. It works well, but it can struggle when there are only a few chances to try different settings. That is why scientists want to use information from past experiments to make better choices, even when those experiments tuned different settings. In this paper, the authors propose two ways to do this using a type of mathematical model called a Gaussian process. The methods are tested and shown to be effective. |
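To make the second approach more concrete, here is a minimal sketch (not the paper's implementation) of the fixed-value variant: points from search spaces with different parameters are embedded into the union of all parameters, missing parameters are imputed with a fixed value, and a simple GP is fit over the unified space. All function and parameter names here (`impute`, `fill`, the `lr`/`momentum` toy dimensions) are illustrative assumptions, and the toy objective is invented for the example.

```python
import numpy as np

def impute(points, dims, all_dims, fill=0.5):
    """Embed points from a sub-search-space into the union space,
    filling missing parameters with a fixed value (illustrative choice)."""
    X = np.full((len(points), len(all_dims)), fill)
    idx = [all_dims.index(d) for d in dims]
    X[:, idx] = points
    return X

def rbf(A, B, lengthscale=0.3):
    """Squared-exponential kernel between row-vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior_mean(X_train, y_train, X_test, noise=1e-6):
    """Zero-mean GP regression posterior mean at X_test."""
    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    return rbf(X_test, X_train) @ np.linalg.solve(K, y_train)

# A historical task tuned only "lr"; the current task tunes "lr" and "momentum".
all_dims = ["lr", "momentum"]
X_hist = impute(np.array([[0.1], [0.5], [0.9]]), ["lr"], all_dims)
X_curr = impute(np.array([[0.2, 0.3], [0.7, 0.8]]), ["lr", "momentum"], all_dims)
X = np.vstack([X_hist, X_curr])
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1]  # toy objective on the union space

# Predict at a new candidate from the current (full) search space.
x_new = impute(np.array([[0.4, 0.6]]), ["lr", "momentum"], all_dims)
mu = gp_posterior_mean(X, y, x_new)
```

In the paper's second method the imputed values can also be treated as GP hyperparameters and inferred jointly with the kernel hyperparameters, rather than fixed as in this sketch; the first method replaces the imputation step with a conditional kernel defined directly over heterogeneous inputs.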
Keywords
» Artificial intelligence » Optimization