Summary of Low-rank Matrix Bandits with Heavy-tailed Rewards, by Yue Kang et al.


Low-rank Matrix Bandits with Heavy-tailed Rewards

by Yue Kang, Cho-Jui Hsieh, Thomas C. M. Lee

First submitted to arXiv on: 26 Apr 2024

Categories

  • Main: Machine Learning (stat.ML)
  • Secondary: Machine Learning (cs.LG)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper and are written at different levels of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper tackles a new problem in stochastic low-rank matrix bandits, where rewards are only assumed to have finite (1+δ)-th moments for some δ ∈ (0,1], rather than being sub-Gaussian. The authors propose LOTUS, an algorithm that achieves a regret bound comparable to state-of-the-art methods under sub-Gaussian noise, without requiring knowledge of the horizon T. The regret bound is of order Õ(d^(3/2) r^(1/2) T^(1/(1+δ)) / D̃_rr). The authors also establish a lower bound for this problem and demonstrate the practical superiority of LOTUS through simulations.
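
To make the setting concrete, here is a minimal Python sketch of the problem (not the authors' LOTUS implementation; all dimensions, thresholds, and function names are illustrative assumptions). It builds a low-rank matrix bandit whose noise is Student-t distributed, so rewards have finite (1+δ)-th moments but are not sub-Gaussian, and shows the kind of truncated-mean averaging that heavy-tailed bandit methods commonly rely on:

```python
import numpy as np

# Illustrative sketch of a stochastic low-rank matrix bandit with
# heavy-tailed rewards. The expected reward of an arm X is <X, Theta>
# for a hidden rank-r matrix Theta; the observed reward adds Student-t
# noise, which has a finite mean but is not sub-Gaussian.

rng = np.random.default_rng(0)

d, r = 8, 2                       # ambient dimension and true rank (toy values)
U = rng.standard_normal((d, r))
V = rng.standard_normal((d, r))
Theta = U @ V.T / np.sqrt(d)      # hidden rank-r parameter matrix

def pull(X, df=1.5):
    """Observe <X, Theta> plus heavy-tailed Student-t noise.

    With df = 1.5 the noise has a finite mean but infinite variance,
    i.e. only finite (1+delta)-th moments for delta < 0.5.
    """
    return np.sum(X * Theta) + rng.standard_t(df)

def truncated_mean(rewards, threshold):
    """Clip rewards at a threshold before averaging, so a few extreme
    samples cannot dominate the empirical mean."""
    return np.clip(rewards, -threshold, threshold).mean()

X = rng.standard_normal((d, d))
X /= np.linalg.norm(X)            # a fixed unit-norm arm
samples = np.array([pull(X) for _ in range(2000)])

print("true mean reward:  ", np.sum(X * Theta))
print("raw empirical mean:", samples.mean())
print("truncated mean:    ", truncated_mean(samples, threshold=5.0))
```

Running this, the raw empirical mean fluctuates badly because the noise has infinite variance, while the truncated mean stays close to the true reward; robust averaging of this flavor is the standard remedy in heavy-tailed bandits, and LOTUS combines such robustness with low-rank matrix estimation.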
Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper solves a new problem in matrix bandits, where rewards follow heavy-tailed distributions. The authors develop an algorithm called LOTUS that performs well even when the total number of rounds (T) is not known in advance. The algorithm balances exploration and exploitation to find the best arm, and the results show that LOTUS outperforms other algorithms on this problem.

Keywords

  • Artificial intelligence