Distilling Deep Reinforcement Learning Policies in Soft Decision Trees
Host Publication: Proceedings of the IJCAI 2019 Workshop on Explainable Artificial Intelligence
Authors: Y. Coppens, K. Efthymiadis, T. Lenaerts and A. Nowé
Publication Date: Aug. 2019
Number of Pages: 6
Abstract: An important step in Reinforcement Learning (RL) research is the creation of mechanisms that give higher-level insight into the black-box policy models used nowadays, providing explanations for the learned behaviour or motivating the choices behind particular decision steps. In this paper, we illustrate how Soft Decision Tree (SDT) distillation can be used to make policies learned through RL more interpretable. A Soft Decision Tree is a binary tree of predetermined depth in which each branching node represents a hierarchical filter that influences how input data are classified. We distill SDTs from a deep neural network RL policy for the Mario AI benchmark and inspect the learned hierarchy of filters, showing which input features lead to specific action distributions in the episode. We take preliminary steps towards interpreting the learned behaviour of the policy and discuss future improvements.
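To make the distillation step described in the abstract concrete, the sketch below shows one possible formulation in PyTorch: each inner node of the tree applies a sigmoid-gated linear filter over the input features, each leaf holds a learnable action distribution, and the tree is fitted by cross-entropy against the action probabilities of a teacher policy on a batch of visited states. The names (SoftDecisionTree, distill, teacher_policy) and all hyperparameters are illustrative assumptions, not the authors' implementation; regularization terms used in the original SDT formulation are omitted.

```python
# Minimal Soft Decision Tree distillation sketch (illustrative, not the authors' code).
# A tree of depth D has 2^D - 1 inner (filter) nodes and 2^D leaves with action logits.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftDecisionTree(nn.Module):
    def __init__(self, n_features, n_actions, depth=3):
        super().__init__()
        self.depth = depth
        n_inner = 2 ** depth - 1                         # branching (filter) nodes
        n_leaves = 2 ** depth                            # leaves holding action logits
        self.filters = nn.Linear(n_features, n_inner)    # one linear filter per inner node
        self.leaf_logits = nn.Parameter(torch.zeros(n_leaves, n_actions))

    def forward(self, x):
        # Probability of taking the right branch at every inner node.
        gate = torch.sigmoid(self.filters(x))             # (batch, n_inner)
        # Path probability of reaching each leaf, built level by level.
        path = torch.ones(x.size(0), 1, device=x.device)  # start at the root
        idx = 0
        for level in range(self.depth):
            n_nodes = 2 ** level
            g = gate[:, idx:idx + n_nodes]                 # gates of this level
            idx += n_nodes
            # Each node splits its path mass into left (1 - g) and right (g) children.
            path = torch.stack([path * (1 - g), path * g], dim=2).flatten(1)
        leaf_dist = F.softmax(self.leaf_logits, dim=-1)    # (n_leaves, n_actions)
        return path @ leaf_dist                            # mixture of leaf distributions

def distill(teacher_policy, states, n_actions, depth=3, epochs=200, lr=1e-2):
    """Fit an SDT to a teacher policy's action probabilities on sampled states."""
    tree = SoftDecisionTree(states.size(1), n_actions, depth)
    opt = torch.optim.Adam(tree.parameters(), lr=lr)
    with torch.no_grad():
        target = teacher_policy(states)                    # teacher action probabilities
    for _ in range(epochs):
        opt.zero_grad()
        pred = tree(states)
        # Cross-entropy between teacher and student action distributions.
        loss = -(target * torch.log(pred + 1e-8)).sum(dim=1).mean()
        loss.backward()
        opt.step()
    return tree
```

After fitting, the weight vector of each inner-node filter can be inspected (e.g. reshaped to the input grid of the Mario state representation) to see which features drive the routing towards leaves with particular action distributions, which is the kind of inspection the paper performs.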