Multi-Armed Bandits with Correlated Arms

Summary

  • Proposed an approach that improves over standard bandit algorithms by leveraging reward correlations between arms obtained from prior information; a minimal sketch follows below
  • Implemented recommendation systems with these algorithms on the MovieLens and Goodreads datasets to validate the theoretical results
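
A common way to exploit such correlations is a C-UCB-style algorithm, where known pseudo-reward functions upper-bound each arm's expected reward given an observed reward of another arm, letting the learner prune arms that the correlation structure rules out. The sketch below is a minimal illustration under assumed pseudo-reward bounds: the function name `correlated_ucb`, the Gaussian reward model, the sampling threshold `t / (2 * K)`, and the usage example are all illustrative assumptions rather than the project's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def correlated_ucb(means, pseudo_rewards, horizon=5000, noise_std=0.1):
    """C-UCB-style bandit sketch. pseudo_rewards[l][k](r) is an
    assumed-known upper bound on the expected reward of arm k given
    that arm l returned reward r (the prior correlation information)."""
    K = len(means)
    counts = np.zeros(K)
    sums = np.zeros(K)
    phi_sums = np.zeros((K, K))  # phi_sums[l, k]: summed pseudo-rewards of k from pulls of l
    regret = 0.0
    for t in range(1, horizon + 1):
        if t <= K:
            arm = t - 1  # initialization: pull each arm once
        else:
            emp = sums / counts
            phi = phi_sums / counts[:, None]  # empirical pseudo-rewards
            # Prune arms whose pseudo-reward bound, taken over
            # well-sampled arms, falls below the best empirical mean.
            sampled_enough = counts >= t / (2 * K)  # assumed threshold
            min_phi = np.where(sampled_enough[:, None], phi, np.inf).min(axis=0)
            competitive = min_phi >= emp.max()
            competitive[int(np.argmax(emp))] = True  # best arm always stays in
            ucb = emp + np.sqrt(2.0 * np.log(t) / counts)
            ucb[~competitive] = -np.inf  # run UCB only on the competitive set
            arm = int(np.argmax(ucb))
        r = rng.normal(means[arm], noise_std)  # observe a noisy reward
        counts[arm] += 1
        sums[arm] += r
        for k in range(K):
            phi_sums[arm, k] += pseudo_rewards[arm][k](r)
        regret += means.max() - means[arm]
    return regret

# Hypothetical usage: three arms driven by one latent preference, so each
# arm's observed reward bounds the others' means up to an assumed-known offset.
means = np.array([0.7, 0.5, 0.3])
offsets = np.abs(means[:, None] - means[None, :])  # assumed-known bounds
pseudo = [[(lambda r, d=offsets[l, k]: r + d) for k in range(3)] for l in range(3)]
print("cumulative regret:", correlated_ucb(means, pseudo))
```

The pruning step is where the claimed improvement would come from: arms ruled out by the pseudo-reward bounds no longer need to be explored, so exploration concentrates on a (typically much smaller) competitive set rather than all K arms.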

Related