Learning to generate humanoid motor behavior
In recent years, advances in reinforcement learning have made it increasingly possible to solve complex tasks in a variety of domains. Our recent work focuses mostly on motor control for simulated bodies, with the aim of building systems capable of generating flexible, adaptive behavior in rich environments. A motivating premise of this work is that it does not always make sense to learn a task from scratch, end-to-end, when certain basic motor skills should, in principle, be reusable. The core research questions we seek to address concern how human-like motor skills can be learned and represented, and which architectures support transfer and reuse in new settings. Reuse may require learning how to sequence and coordinate available skills, as well as integrating additional sensory information to determine which motor behaviors are relevant in a given setting. This talk will survey our progress on these topics.
A pizza lunch will be served.