
Summary of Improved Regret of Linear Ensemble Sampling, by Harin Lee et al.


Improved Regret of Linear Ensemble Sampling

by Harin Lee, Min-hwan Oh

First submitted to arXiv on: 6 Nov 2024

Categories

  • Main: Machine Learning (stat.ML)
  • Secondary: Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (GrooveSquid.com original content)
This paper improves the regret analysis of a machine learning algorithm called linear ensemble sampling. It proves that with an ensemble size that grows only logarithmically with the time horizon T, the algorithm achieves a frequentist regret bound of Õ(d^{3/2}√T), matching the state-of-the-art results for randomized linear bandit algorithms. The proof is built on a general regret analysis framework for linear bandit algorithms. The paper also shows that Linear Perturbed-History Exploration (LinPHE) is a special case of linear ensemble sampling, and uses this relationship to derive a new Õ(d^{3/2}√T) regret bound for LinPHE that is independent of the number of arms. Together, these results advance the theoretical foundation of ensemble sampling, bringing its regret bounds in line with the best known bounds for other randomized exploration algorithms.

Low Difficulty Summary (GrooveSquid.com original content)
This paper improves a machine learning algorithm called linear ensemble sampling so that it makes better choices over time. It shows that by keeping a small, slowly growing collection of models, the algorithm can explore enough to avoid getting stuck on one choice while still making good decisions. The paper also finds a connection between two similar ideas, Linear Perturbed-History Exploration (LinPHE) and linear ensemble sampling, which helps us understand both of them better.

Keywords

* Artificial intelligence
* Machine learning