Learning multi-agent state space representations
Host Publication: Proceedings of the 8th European Workshop on Multi-Agent Systems
Authors: Y. De Hauwere, P. Vrancx and A. Nowé
Publication Date: Dec. 2010
Number of Pages: 15
Abstract: This paper describes an algorithm, called CQ-learning, which learns to adapt the state representation in multi-agent systems in order to coordinate with other agents. We propose a multi-level approach which builds a progressively more advanced representation of the learning problem. The idea is that agents start with a minimal single-agent state space representation, which is expanded only when necessary. In cases where agents detect conflicts, they automatically expand their state to explicitly take the other agents into account. These conflict situations are then analysed in an attempt to find an abstract representation which generalises over the problem states. Our system allows agents to learn effective policies while avoiding the exponential state space growth typical of multi-agent environments. Furthermore, the method we introduce to generalise over conflict states allows knowledge to be transferred to unseen and possibly more complex situations. Our research departs from previous efforts in this area of multi-agent learning because our agents combine state space generalisation with an agent-centric point of view. The algorithms we introduce can be used in robotic systems to automatically reduce the sensor information to what is essential for solving the problem at hand. This is crucial when dealing with multiple agents, since learning in such environments is a cumbersome task due to the massive amount of information, much of which may be irrelevant. In our experiments we demonstrate the approach in simulations of such environments using various gridworlds.
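The core mechanism the abstract describes, an agent that learns over its own local states and switches to an augmented joint representation only in states where a conflict with another agent is detected, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the class name CQLearningAgent, the simple mean-deviation conflict test (standing in for whatever statistical test the paper uses against a single-agent baseline), and the parameters conflict_threshold and min_samples are all assumptions introduced for illustration.

import random
from collections import defaultdict

class CQLearningAgent:
    """Sketch of conflict-driven state expansion (names are illustrative)."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1,
                 conflict_threshold=1.0, min_samples=20):
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q_local = defaultdict(float)   # Q[(state, action)] over local states
        self.q_joint = defaultdict(float)   # Q[((state, other), action)] where expanded
        self.baseline = {}                  # expected reward per (state, action)
        self.samples = defaultdict(list)    # observed rewards per (state, action)
        self.expanded = set()               # local states flagged as conflict states
        self.conflict_threshold = conflict_threshold
        self.min_samples = min_samples

    def set_baseline(self, state, action, expected_reward):
        # Assumed to come from a prior single-agent learning phase.
        self.baseline[(state, action)] = expected_reward

    def _key(self, state, other_state):
        # Augment the state with the other agent only where a conflict was found.
        return (state, other_state) if state in self.expanded else state

    def select_action(self, state, other_state):
        q = self.q_joint if state in self.expanded else self.q_local
        key = self._key(state, other_state)
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: q[(key, a)])

    def update(self, state, other_state, action, reward, next_state, next_other):
        # Conflict detection: flag the state if observed rewards drift from
        # the single-agent baseline (a simple mean-deviation check here).
        obs = self.samples[(state, action)]
        obs.append(reward)
        base = self.baseline.get((state, action))
        if (base is not None and len(obs) >= self.min_samples
                and abs(sum(obs) / len(obs) - base) > self.conflict_threshold):
            self.expanded.add(state)

        # Standard Q-learning update over whichever representation applies.
        q = self.q_joint if state in self.expanded else self.q_local
        q_next = self.q_joint if next_state in self.expanded else self.q_local
        key = self._key(state, other_state)
        next_key = self._key(next_state, next_other)
        best_next = max(q_next[(next_key, a)] for a in self.actions)
        q[(key, action)] += self.alpha * (reward + self.gamma * best_next
                                          - q[(key, action)])

The design point this sketch tries to capture is that joint-state entries are created lazily, only for the states placed in the expanded set, so the representation grows with the number of actual conflict states rather than with the full joint state space.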