Interactive Thompson Sampling for Multi-Objective Multi-Armed Bandits
Host Publication: International Conference on Algorithmic Decision Theory
Authors: D. Roijers, L. Zintgraf and A. Nowé
Publisher: Springer
Publication Date: Oct. 2017
Number of Pages: 17
ISBN: 978-3-319-67503-9
Abstract: In multi-objective reinforcement learning (MORL), much attention is paid to generating optimal solution sets for unknown utility functions of users, based only on the stochastic reward vectors. In online MORL, on the other hand, the agent can often elicit preferences from the user, enabling it to learn about its user's utility function directly. In this paper, we study online MORL with user interaction in the multi-objective multi-armed bandit (MOMAB) setting, perhaps the most fundamental MORL setting. We use Bayesian learning algorithms to learn about the environment and the user simultaneously. Specifically, we propose two algorithms, Utility-MAP UCB (umap-UCB) and Interactive Thompson Sampling (ITS), and show empirically that the regret of these algorithms closely approximates the regret of UCB and regular Thompson sampling provided with the ground-truth utility function of the user from the start, and that ITS outperforms umap-UCB.
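To illustrate the general idea of learning about the environment and the user at the same time, the following is a minimal sketch of an interactive Thompson sampling loop for a MOMAB. It assumes a linear scalarisation u(r) = w . r, Gaussian reward vectors, a particle posterior over the user's weight vector w, and a simple pairwise-comparison query every fixed number of steps. All names, priors, and the comparison-based update are illustrative assumptions, not the paper's exact formulation.

# Sketch of an interactive Thompson sampling loop for a MOMAB (assumptions:
# linear utility, Gaussian rewards, particle posterior over user weights).
import numpy as np

rng = np.random.default_rng(0)
n_arms, n_obj, horizon = 5, 2, 2000

# Ground truth, hidden from the agent: arm mean vectors and user weights.
true_means = rng.uniform(0, 1, size=(n_arms, n_obj))
true_w = np.array([0.7, 0.3])

# Posterior over each arm's mean reward vector: Gaussian with known noise;
# track sufficient statistics (running sums and pseudo-counts).
sums = np.zeros((n_arms, n_obj))
counts = np.ones(n_arms)  # pseudo-count of 1 acts as a weak prior at 0

# Particle posterior over the user's weight vector on the simplex.
particles = rng.dirichlet(np.ones(n_obj), size=500)
weights = np.ones(len(particles)) / len(particles)

def user_prefers(r1, r2):
    """Simulated user: compares the true utilities of two reward vectors."""
    return true_w @ r1 > true_w @ r2

for t in range(horizon):
    # Thompson sample: one mean vector per arm, one weight vector.
    mu_hat = sums / counts[:, None]
    sampled_means = rng.normal(mu_hat, 1.0 / np.sqrt(counts)[:, None])
    w_hat = particles[rng.choice(len(particles), p=weights)]

    # Pull the arm maximising sampled utility; observe a reward vector.
    arm = int(np.argmax(sampled_means @ w_hat))
    reward = rng.normal(true_means[arm], 0.1)
    sums[arm] += reward
    counts[arm] += 1

    # Occasionally query the user with a pairwise comparison between the
    # two best sampled arms, and reweight the particle posterior over w.
    if t % 50 == 0:
        top2 = np.argsort(sampled_means @ w_hat)[-2:]
        r1, r2 = sampled_means[top2[1]], sampled_means[top2[0]]
        pref = user_prefers(r1, r2)
        consistent = (particles @ (r1 - r2) > 0) == pref
        weights = np.where(consistent, weights, weights * 1e-3)
        weights /= weights.sum()

The key design point captured here is that a single posterior sample drives both sides of the problem: sampling w alongside the arm means makes the arm choice, the exploration, and the choice of which pair to show the user all fall out of the same Thompson draw.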