Epsilon-Greedy Thompson Sampling to Bayesian Optimization
by Bach Do, Taiwo Adebiyi, Ruda Zhang
First submitted to arXiv on: 1 Mar 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Optimization and Control (math.OC); Machine Learning (stat.ML)
GrooveSquid.com Paper Summaries
GrooveSquid.com's goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper and is written at a different level of difficulty. The medium-difficulty and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper's original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
| --- | --- | --- |
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | The proposed method combines Thompson sampling (TS) with the ε-greedy policy to strengthen exploitation in Bayesian optimization (BO). TS is a popular way to handle the exploration-exploitation trade-off in BO, but on its own it manages exploitation only weakly. The new approach uses the ε-greedy policy to switch randomly between two extremes of TS: generic TS, which favors exploration, and sample-average TS, which favors exploitation. By tuning the parameter ε, the method balances exploration and exploitation, leading to improved performance on benchmark functions and on a steel cantilever beam inverse problem (a minimal code sketch of this switching rule appears after the table). |
| Low | GrooveSquid.com (original content) | Thompson sampling is a powerful tool for solving optimization problems: it explores new options while also making use of what is already known to work well. The method is improved here by adding an ε-greedy policy, which makes it switch between two modes: one that looks widely across many possibilities (exploration) and one that focuses on the best options found so far (exploitation). This gives a better balance between trying new things and sticking with what works, and the method performed well when tested on benchmark problems. |
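To make the switching rule concrete, here is a minimal, hypothetical Python sketch of ε-greedy Thompson sampling for minimizing a toy 1-D objective. It is built on scikit-learn's Gaussian process regressor; the toy objective, the candidate grid, the parameter values, and the choice to let ε be the probability of taking the exploratory (generic TS) step are illustrative assumptions, not the authors' reference implementation.

```python
# Hypothetical sketch of epsilon-greedy Thompson sampling for Bayesian
# optimization (minimization). Assumption: epsilon is the probability of
# the exploratory (generic TS) branch; 1 - epsilon selects the
# exploitative (sample-average TS) branch.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):
    return np.sin(3 * x) + 0.1 * x**2          # toy 1-D objective (assumed)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(5, 1))             # small initial design
y = objective(X).ravel()
candidates = np.linspace(-3, 3, 500).reshape(-1, 1)
epsilon, n_avg = 0.3, 50                        # switch probability, samples to average

for _ in range(20):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6,
                                  normalize_y=True).fit(X, y)
    if rng.random() < epsilon:
        # generic TS: minimize a single posterior sample path (exploration)
        path = gp.sample_y(candidates, n_samples=1,
                           random_state=int(rng.integers(1_000_000))).ravel()
    else:
        # sample-average TS: minimize the average of many sample paths (exploitation)
        path = gp.sample_y(candidates, n_samples=n_avg,
                           random_state=int(rng.integers(1_000_000))).mean(axis=1)
    x_next = candidates[np.argmin(path)].reshape(1, -1)
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next).ravel())

print("best observed value:", y.min())
```

In this sketch, ε = 1 would reduce to generic TS (pure single-sample exploration) and ε = 0 to sample-average TS (near-greedy exploitation of the averaged posterior), with intermediate values trading off the two extremes.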
Keywords
* Artificial intelligence
* Optimization