Thesis Adapting bipedal neuro-motor policies on planned footsteps
Date
2024-04
Abstract
This study investigates a hierarchical reinforcement learning approach to achieve human-like walking in bipedal robots while following marked footsteps. Traditionally, state machines and model-based methods have been used for this task; they ensure stability and safety but lack natural, human-like motion. Our approach uses a two-level architecture: a high-level policy trained specifically for following footsteps and a low-level policy distilled from motion capture data to generate natural gaits. Experiments demonstrate that this hierarchical approach significantly outperforms training a single network, particularly for complex tasks on human-sized robots. The low-level network plays a crucial role, substantially reducing joint torques and speeds while achieving stable walking. However, a current limitation is the inability to follow footsteps on stairs. We observed that a general motion capture dataset and a locomotion-specific one achieved similar footstep-following results, but the locomotion dataset produced more visually natural, human-like walking, especially for forward walking. Future work will aim to improve the robot's walking robustness for navigating uneven terrain such as stairs and slopes. Our findings suggest that low-level networks pre-trained on motion capture data are a viable approach for achieving human-like walking gaits in real-world, human-sized robots. This research paves the way for developing bipedal robots with efficient and natural walking capabilities. Accompanying videos and code are available online.
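To make the two-level architecture described above concrete, the following is a minimal sketch (not the authors' implementation) of how a high-level footstep-following policy can drive a frozen, pre-trained low-level gait policy through a latent command. All network sizes, observation dimensions, and the latent interface are illustrative assumptions.

```python
# Hypothetical sketch of the hierarchical policy; dimensions and layer sizes are assumptions.
import torch
import torch.nn as nn

class HighLevelPolicy(nn.Module):
    """Maps robot state + upcoming footstep targets to a latent command."""
    def __init__(self, state_dim=48, footstep_dim=6, latent_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + footstep_dim, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, state, footsteps):
        return self.net(torch.cat([state, footsteps], dim=-1))

class LowLevelPolicy(nn.Module):
    """Low-level gait generator (e.g. distilled from motion capture data):
    proprioception + latent command -> joint position targets."""
    def __init__(self, proprio_dim=48, latent_dim=16, num_joints=12):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(proprio_dim + latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, num_joints),
        )

    def forward(self, proprio, latent):
        return self.net(torch.cat([proprio, latent], dim=-1))

# One hierarchical control step: only the high-level policy is trained for
# footstep following; the distilled low-level network stays frozen.
high = HighLevelPolicy()
low = LowLevelPolicy()
for p in low.parameters():
    p.requires_grad = False          # freeze the pre-trained low-level network

state = torch.zeros(1, 48)           # placeholder robot observation
footsteps = torch.zeros(1, 6)        # e.g. (x, y, yaw) of the next two planned footsteps
latent = high(state, footsteps)      # high-level latent command
joint_targets = low(state, latent)   # low-level action sent to the joint controllers
```

In this kind of setup, the latent command acts as the interface between the task-specific high-level policy and the motion-capture-derived low-level policy, which is one way to obtain the reduced joint torques and more natural gaits reported in the abstract.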
Keywords
Bipedal locomotion, Deep reinforcement learning, Hierarchical networks, Human-like motion, Motion capture
Campus
Campus Casa Central Valparaíso