DeepMimic: Example-Guided Deep Reinforcement Learning of Physics-Based Character Skills

The adoption of physically simulated character animation in industry remains challenging, primarily because existing methods lack directability and generalizability. Several categories of approaches aim to combine data-driven behavior specification with the ability to reproduce such behavior in a physical simulation. Kinematic models rely on large amounts of data, and their ability to generalize to unseen situations can be limited. Physics-based models incorporate prior knowledge about the physics of motion, but they perform poorly on “dynamic motions” that require long-term planning. Motion imitation approaches can achieve highly dynamic motions, but are limited by the complexity of the system and a lack of adaptability to task objectives. Techniques based on reinforcement learning (RL), although comparatively successful at achieving the defined objectives, often produce unrealistic motion artifacts.

This paper addresses these problems with DeepMimic, a deep RL-based “framework for physics-based character animation” that combines a motion-imitation objective with a task objective. DeepMimic demonstrates a wide range of motion skills; adapts to a variety of characters, skills, and tasks by leveraging rich information from high-dimensional state and environment descriptions; is conceptually simpler than prior motion-imitation approaches; and can work with data provided as either motion capture clips or keyframed animation. While the paper presents intricate details about the DeepMimic framework, the high-level details and novel contributions are summarized here, skipping the standard deep RL problem formulation. ...
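The core idea of combining a motion-imitation objective with a task objective can be sketched as a weighted sum of two reward terms, with the imitation reward itself built from exponentiated tracking errors (pose, velocity, end-effector, and center-of-mass terms, in the spirit of the paper). This is a minimal illustrative sketch, not the authors' implementation; the weights and error scales below are assumptions chosen for illustration:

```python
import math

def imitation_reward(pose_err, vel_err, end_eff_err, com_err):
    """Imitation reward r^I: product-of-exponentials style tracking terms.

    Each term decays exponentially with its tracking error, so it is 1.0
    for a perfect match and approaches 0 as the error grows. The scale
    factors and mixing weights here are illustrative assumptions.
    """
    r_pose = math.exp(-2.0 * pose_err)       # joint-orientation tracking
    r_vel = math.exp(-0.1 * vel_err)         # joint-velocity tracking
    r_end = math.exp(-40.0 * end_eff_err)    # end-effector position tracking
    r_com = math.exp(-10.0 * com_err)        # center-of-mass tracking
    return 0.65 * r_pose + 0.1 * r_vel + 0.15 * r_end + 0.1 * r_com

def total_reward(r_imitation, r_task, w_imitation=0.7, w_task=0.3):
    """Total reward r = w^I * r^I + w^G * r^G (imitation + task objective).

    The weights w^I and w^G trade off motion realism against task
    completion; the defaults here are hypothetical.
    """
    return w_imitation * r_imitation + w_task * r_task
```

A policy trained with `total_reward` is pushed to stay close to the reference motion (via `imitation_reward`) while still optimizing the task-specific term, which is the mechanism the paper credits for producing both realistic and goal-directed behavior.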

October 26, 2020 · 5 min · Kumar Abhishek