Generalized Linear Bandits with Limited Adaptivity

by Ayush Sawarni, Nirjhar Das, Siddharth Barman, Gaurav Sinha

First submitted to arXiv on: 10 Apr 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty summary is the paper's original abstract, available on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper presents two algorithms, B-GLinCB and RS-GLinCB, designed for limited-adaptivity settings in generalized linear contextual bandit problems. The first algorithm, B-GLinCB, fixes upfront the M rounds at which its policy is updated, while the second, RS-GLinCB, adaptively decides during its run when to perform its policy updates. In the first setting, B-GLinCB incurs Õ(√T) regret when M = Ω(log log T) and the arm feature vectors are stochastically generated. In the second setting, RS-GLinCB updates its policy Õ(log² T) times and achieves Õ(√T) regret even when the arm feature vectors are adversarially generated. Notably, these regret bounds do not depend on κ, a key instance-dependent parameter that captures the non-linearity of the underlying reward model; the technique for removing this dependence may be of independent interest for generalized linear contextual bandits. (A small illustrative sketch of the limited-adaptivity loop follows the summaries below.)

Low Difficulty Summary (written by GrooveSquid.com, original content)
The paper develops two algorithms, B-GLinCB and RS-GLinCB, that address limited adaptivity in generalized linear contextual bandit problems. The first algorithm decides in advance when it will update its policy, while the second decides this on the fly during its run. The first works well when it is allowed enough policy updates and the arm feature vectors are random; the second does well even if the arm feature vectors are chosen to make things harder. Both algorithms do better than was previously possible, without needing to know certain details about the problem in advance.

Keywords

  • Artificial intelligence