Deep Q-Learning for Humanoid Walking
Existing methods for humanoid robot walking suffer from a lack of adaptability to new and unexpected environments because they rely solely on higher-level motion control with relatively fixed sub-motions, such as taking an individual step. These conventional methods require significant knowledge of controls and assumptions about the expected surroundings. Humans, however, walk very efficiently and adapt well to new environments because their walking behaviors are learned. Our approach is to create a reinforcement learning framework that continuously chooses an action to perform, using a neural network to rate a set of joint values based on the current state of the robot. We successfully train the Boston Dynamics Atlas robot to learn how to walk with this framework.
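As a rough illustration of the idea described above (not the report's actual code), the sketch below shows a minimal deep Q-learning loop in which a small network scores a discretized set of candidate joint-value actions for the current robot state. The `StubEnv` environment, the state and action dimensions, and all hyperparameters are assumptions made purely for illustration; a real setup would wrap a physics simulator of the Atlas robot.

```python
# Minimal deep Q-learning sketch (illustrative only; not the report's code).
# Assumptions: a hypothetical environment with reset()/step(action), a
# discretized action set of candidate joint-value vectors, and small
# made-up network sizes and hyperparameters.
import random
import numpy as np
import torch
import torch.nn as nn

STATE_DIM = 10   # e.g., joint angles + torso orientation (assumed)
N_ACTIONS = 8    # discretized candidate joint-value sets (assumed)
GAMMA, EPSILON, LR = 0.99, 0.1, 1e-3

class QNet(nn.Module):
    """Rates each candidate joint-value action for the current state."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS))

    def forward(self, state):
        return self.layers(state)

class StubEnv:
    """Stand-in environment; a real one would wrap a robot simulator."""
    def reset(self):
        return np.zeros(STATE_DIM, dtype=np.float32)

    def step(self, action):
        next_state = np.random.randn(STATE_DIM).astype(np.float32)
        reward = float(np.random.rand())   # e.g., forward progress
        done = random.random() < 0.05
        return next_state, reward, done

qnet = QNet()
opt = torch.optim.Adam(qnet.parameters(), lr=LR)
env = StubEnv()

for episode in range(10):
    state, done = env.reset(), False
    while not done:
        # Epsilon-greedy: usually pick the highest-rated joint-value set.
        if random.random() < EPSILON:
            action = random.randrange(N_ACTIONS)
        else:
            action = int(qnet(torch.from_numpy(state)).argmax())
        next_state, reward, done = env.step(action)
        # One-step Q-learning target: r + gamma * max_a' Q(s', a').
        with torch.no_grad():
            target = reward + (0.0 if done else
                GAMMA * qnet(torch.from_numpy(next_state)).max().item())
        pred = qnet(torch.from_numpy(state))[action]
        loss = (pred - target) ** 2
        opt.zero_grad(); loss.backward(); opt.step()
        state = next_state
```

A full deep Q-learning system would typically also use an experience replay buffer and a separate target network for stability; both are omitted here for brevity.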
- This report represents the work of one or more WPI undergraduate students submitted to the faculty as evidence of completion of a degree requirement. WPI routinely publishes these reports on its website without editorial or peer review.
- Identifier: E-project-042616-142036
- Year: 2016
- Date created: 2016-04-26
Items
- Deep_Q-Learning_for_Humanoid_Walking.pdf
Permanent link to this page: https://digital.wpi.edu/show/3t945s25q