LFCS Seminar: Navin Goyal
Analysis of Thompson Sampling for the Multiarmed Bandit Problem
When: Jun 28, 2012, 04:00 PM to 05:00 PM
Where: IF 4.3133
The multiarmed bandit problem is a popular model for studying the exploration/exploitation tradeoff in sequential decision making. This problem has many applications, including online search advertising and clinical trials. Many algorithms are now available for this well-studied problem. In this talk, I will discuss one of the earliest algorithms, given by W. R. Thompson and dating back to 1933.
This algorithm, referred to as Thompson Sampling, is a natural Bayesian algorithm. The basic idea is to choose an arm to play according to its probability of being the best arm. The Thompson Sampling algorithm has been shown experimentally to be close to optimal. In addition, it is efficient to implement and exhibits several desirable properties, such as small regret under delayed feedback. However, theoretical understanding of this algorithm was quite limited. We show that the Thompson Sampling algorithm achieves logarithmic expected regret for the stochastic multiarmed bandit problem. Our upper bounds on regret are close to the asymptotic lower bounds for this problem. I will briefly discuss open problems and perhaps some ongoing work as well.
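To make the idea concrete, here is a minimal sketch of Thompson Sampling for the Bernoulli-reward case with independent Beta(1, 1) priors on each arm's mean. This is a standard illustrative instantiation, not code from the talk; the function name, seed, and the example arm means are invented for illustration.

```python
import random

def thompson_sampling(true_means, horizon, seed=0):
    """Thompson Sampling for Bernoulli bandits with Beta(1, 1) priors.

    Each round, sample a mean estimate from every arm's Beta posterior
    and play the arm with the largest sample; this plays each arm with
    exactly its posterior probability of being the best arm.
    """
    rng = random.Random(seed)
    n_arms = len(true_means)
    successes = [0] * n_arms  # posterior is Beta(successes + 1, failures + 1)
    failures = [0] * n_arms
    total_reward = 0
    for _ in range(horizon):
        # Draw one sample per arm from its current Beta posterior.
        samples = [rng.betavariate(successes[i] + 1, failures[i] + 1)
                   for i in range(n_arms)]
        arm = max(range(n_arms), key=lambda i: samples[i])
        # Observe a Bernoulli reward and update that arm's posterior.
        reward = 1 if rng.random() < true_means[arm] else 0
        successes[arm] += reward
        failures[arm] += 1 - reward
        total_reward += reward
    return total_reward
```

Note that exploration here is implicit: an arm pulled rarely has a wide posterior, so it occasionally produces the largest sample and gets played, while well-understood suboptimal arms are sampled low and quickly abandoned.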
This talk is based on joint work with Shipra Agrawal.