Risk-Sensitivity Through Multi-Objective Reinforcement Learning
Host Publication: 2015 IEEE Congress on Evolutionary Computation (CEC)
Authors: K. Van Moffaert, T. Brys and A. Nowé
Publication Place: Japan
Publisher: IEEE
Publication Date: May 2015
Number of Pages: 8
ISBN: 9781479914869
Abstract: In reinforcement learning, the goal of the agent is usually to maximize the expected return. In practical applications, however, algorithms that focus solely on maximizing the mean return can be inappropriate, as they do not account for the variability of their solutions. A variability measure can therefore be included to accommodate a risk-sensitive setting, i.e. one in which the system engineer can explicitly define the tolerated level of variance. Our approach is based on multi-objectivization, where a standard single-objective environment is extended with one (or more) additional objectives. More precisely, we augment the standard feedback signal of an environment with an additional objective that captures the variance of the solution. We highlight that our algorithm, named risk-sensitive Pareto Q-learning, (1) is specifically tailored to learn a set of Pareto non-dominated policies that trade off these two objectives, and (2) can also retrieve every policy that has been learned throughout the state-action space. This is in contrast to standard risk-sensitive approaches, where only a single trade-off between mean and variance is learned at a time.
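To make the mean-variance trade-off concrete, below is a minimal Python sketch of the Pareto non-dominance filtering idea the abstract refers to, applied to candidate policies summarized as (mean return, negative variance) vectors so that maximizing both objectives favors high expected return and low variability. This is an illustration of the general concept, not the paper's risk-sensitive Pareto Q-learning algorithm; the helper names (dominates, pareto_front) and the sample numbers are assumptions for demonstration only.

    import numpy as np

    def dominates(a, b):
        """True if vector a Pareto-dominates b: >= in every objective, > in at least one."""
        a, b = np.asarray(a), np.asarray(b)
        return bool(np.all(a >= b) and np.any(a > b))

    def pareto_front(points):
        """Keep only the non-dominated objective vectors (hypothetical helper)."""
        return [p for p in points
                if not any(dominates(q, p) for q in points if q is not p)]

    # Each candidate policy: (mean return, -variance of return).
    candidates = [(10.0, -4.0), (8.0, -1.0), (9.0, -5.0), (12.0, -9.0)]

    # (9.0, -5.0) is dominated by (10.0, -4.0); the other three survive,
    # each representing a different tolerated level of variance.
    for mean, neg_var in pareto_front(candidates):
        print(f"mean={mean:.1f}, variance={-neg_var:.1f}")

Keeping the whole non-dominated set, rather than scalarizing the two objectives into one reward, is what lets an engineer pick a policy matching a tolerated variance level after learning instead of committing to a single trade-off beforehand.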