Robust speaker localization for real-world robots
This publication appears in: Computer Speech & Language
Authors: G. Athanasopoulos, W. Verhelst and H. Sahli
Volume: 34
Issue: 1
Pages: 129-153
Publication Date: Apr. 2015
Abstract: Autonomous human-robot interaction ultimately requires an artificial audition module that allows the robot to process and interpret a combination of verbal and non-verbal auditory inputs. A key component of such a module is acoustic localization, which not only enables the robot to simultaneously localize multiple persons and auditory events of interest in the environment, but also provides input to auditory tasks such as speech enhancement and speech recognition. The use of microphone arrays in robots is an efficient and commonly applied approach to the localization problem. In this paper, moving away from simulated environments, we address acoustic localization under real-world conditions and limitations. We propose a series of enhancements that take into account the imperfect frequency response of the array microphones and address the influence of the robot's shape and surface material. Motivated by the importance of the signal's phase information, we introduce a novel pre-processing step for enhancing acoustic localization. Results show that the proposed approach improves localization performance in jointly noisy and reverberant conditions and allows a humanoid robot to locate multiple speakers in a real-world environment.
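To illustrate the kind of phase-based processing underlying microphone-array localization, the sketch below estimates the time-difference of arrival (TDOA) between two microphones with GCC-PHAT, which normalizes away spectral magnitude and keeps only phase information. This is a minimal, generic example rather than the paper's actual method; the microphone spacing, sampling rate, and signal names are illustrative assumptions.

```python
# Minimal GCC-PHAT sketch: estimate the TDOA between two microphone signals
# and map it to a bearing angle for a two-microphone array.
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None, interp=16):
    """Return the estimated TDOA (seconds) of sig relative to ref via GCC-PHAT."""
    n = sig.shape[0] + ref.shape[0]
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    # Cross-spectrum normalized to unit magnitude (phase transform): only phase survives
    R = SIG * np.conj(REF)
    R /= np.abs(R) + 1e-15
    cc = np.fft.irfft(R, n=interp * n)          # interpolated cross-correlation
    max_shift = interp * n // 2
    if max_tau is not None:
        max_shift = min(int(interp * fs * max_tau), max_shift)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift   # peak lag in interpolated samples
    return shift / float(interp * fs)

# Usage (hypothetical setup): two microphones d metres apart; the TDOA maps to a bearing.
fs, d, c = 16000, 0.2, 343.0            # sample rate, mic spacing (m), speed of sound (m/s)
src = np.random.randn(fs)               # broadband source stand-in
delay = int(0.0003 * fs)                # ~0.3 ms true inter-microphone delay
mic1, mic2 = src, np.roll(src, delay)
tau = gcc_phat(mic2, mic1, fs, max_tau=d / c)
theta = np.degrees(np.arcsin(np.clip(tau * c / d, -1.0, 1.0)))
print(f"estimated TDOA = {tau * 1e3:.2f} ms, bearing ~ {theta:.1f} deg")
```

In practice, a robot-mounted array combines such pairwise estimates across several microphones, which is where the abstract's enhancements (compensating imperfect microphone frequency responses and the influence of the robot's body) become relevant.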