
LFCS Seminar: Navin Goyal


Analysis of Thompson Sampling for the Multi-armed Bandit Problem

What: LFCS Seminar
When: Jun 28, 2012, from 04:00 PM to 05:00 PM
Where: IF 4.31-33


The multi-armed bandit problem is a popular model for studying the exploration/exploitation trade-off in sequential decision making. It has many applications, including online search advertising and clinical trials. Many algorithms are now available for this well-studied problem. In this talk, I will discuss one of the earliest, given by W. R. Thompson and dating back to 1933.


This algorithm, referred to as Thompson Sampling, is a natural Bayesian algorithm. The basic idea is to choose an arm to play according to its probability of being the best arm. Thompson Sampling has been shown experimentally to be close to optimal. In addition, it is efficient to implement and exhibits several desirable properties, such as small regret under delayed feedback. However, theoretical understanding of this algorithm was quite limited. We show that Thompson Sampling achieves logarithmic expected regret for the stochastic multi-armed bandit problem. Our upper bounds on regret are close to the asymptotic lower bounds for this problem. I will briefly discuss open problems and perhaps some ongoing work as well.
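The basic idea described above can be sketched in code. This is a minimal illustrative sketch, not material from the talk: it assumes Bernoulli-reward arms and independent Beta(1, 1) priors, the standard setting in which Thompson Sampling is usually presented. Each round, a mean is sampled from every arm's posterior and the arm with the largest sample is played, which is exactly "play each arm according to its probability of being the best".

```python
import random

def thompson_sampling(true_means, horizon, seed=0):
    """Thompson Sampling for Bernoulli bandits with Beta(1, 1) priors.

    Maintains a Beta posterior per arm; each round, samples a mean from
    every posterior and plays the arm whose sample is largest. Returns
    the cumulative regret against always playing the best arm.
    (Illustrative sketch; `true_means` would be unknown in practice.)
    """
    rng = random.Random(seed)
    k = len(true_means)
    successes = [0] * k  # number of reward-1 observations per arm
    failures = [0] * k   # number of reward-0 observations per arm
    best = max(true_means)
    regret = 0.0
    for _ in range(horizon):
        # Posterior of arm i is Beta(successes[i] + 1, failures[i] + 1).
        samples = [rng.betavariate(successes[i] + 1, failures[i] + 1)
                   for i in range(k)]
        arm = samples.index(max(samples))
        reward = 1 if rng.random() < true_means[arm] else 0
        if reward:
            successes[arm] += 1
        else:
            failures[arm] += 1
        regret += best - true_means[arm]
    return regret
```

Because a poorly explored arm has a wide posterior, it is occasionally sampled high and played (exploration), while an arm with strong evidence of a high mean is played most of the time (exploitation); no explicit exploration schedule is needed.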
 
This talk is based on joint work with Shipra Agrawal.
 
