Publications of Freek Stulp


Hierarchical Reinforcement Learning with Motion Primitives
Freek Stulp and Stefan Schaal. Hierarchical Reinforcement Learning with Motion Primitives. In 11th IEEE-RAS International Conference on Humanoid Robots, 2011.
Download: [PDF] (1.7MB)
Abstract
Temporal abstraction and task decomposition drastically reduce the search space for planning and control, and are fundamental to making complex tasks amenable to learning. In the context of reinforcement learning, temporal abstractions are studied within the paradigm of hierarchical reinforcement learning. We propose a hierarchical reinforcement learning approach by applying our algorithm PI2 to sequences of Dynamic Movement Primitives. For robots, this representation has some important advantages over discrete representations in terms of scalability and convergence speed. The parameters of the Dynamic Movement Primitives are learned simultaneously at different levels of temporal abstraction. The shape of a movement primitive is optimized w.r.t. the costs up to the next primitive in the sequence, and the subgoals between two movement primitives w.r.t. the costs up to the end of the entire movement primitive sequence. We implement our approach on an 11-DOF arm and hand, and evaluate it in a pick-and-place task in which the robot transports an object between different shelves in a cupboard.
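For illustration, the two-level update described in the abstract can be sketched as follows. This is a minimal, self-contained Python sketch; the cost function, dimensions, temperature, and all variable names are illustrative assumptions, not the authors' implementation.

# Minimal sketch of a two-level PI^2-style update on a sequence of movement
# primitives, as described in the abstract above. Everything here (cost
# function, dimensions, temperature) is an illustrative assumption, not the
# authors' implementation.
import numpy as np

rng = np.random.default_rng(0)

N_PRIMITIVES = 3      # length of the primitive sequence
N_SHAPE = 10          # shape parameters (basis-function weights) per primitive
N_ROLLOUTS = 20       # exploration rollouts per update
NOISE_STD = 0.1       # exploration noise
LAMBDA = 0.1          # temperature of the exponentiated-cost weighting

# Parameters at the two levels of temporal abstraction:
shapes = [np.zeros(N_SHAPE) for _ in range(N_PRIMITIVES)]  # per-primitive shape
subgoals = np.zeros(N_PRIMITIVES)                           # subgoal of each primitive

def rollout_costs(shape_noise, goal_noise):
    """Toy stand-in for executing the perturbed primitive sequence on the robot.
    Returns one scalar cost per primitive (the cost incurred during it)."""
    costs = []
    for i in range(N_PRIMITIVES):
        w = shapes[i] + shape_noise[i]
        g = subgoals[i] + goal_noise[i]
        # Arbitrary quadratic cost landscape, only to make the sketch runnable.
        costs.append(np.sum((w - 0.5) ** 2) + (g - 1.0) ** 2)
    return np.array(costs)

def pi2_weights(costs_to_go):
    """Exponentiated, normalized cost-to-go weighting over rollouts (PI^2 style)."""
    s = (costs_to_go - costs_to_go.min()) / max(np.ptp(costs_to_go), 1e-10)
    w = np.exp(-s / LAMBDA)
    return w / w.sum()

for update in range(100):
    shape_eps = [rng.normal(0.0, NOISE_STD, (N_ROLLOUTS, N_SHAPE))
                 for _ in range(N_PRIMITIVES)]
    goal_eps = rng.normal(0.0, NOISE_STD, (N_ROLLOUTS, N_PRIMITIVES))

    # Cost per rollout and per primitive: shape (N_ROLLOUTS, N_PRIMITIVES).
    costs = np.array([
        rollout_costs([shape_eps[i][k] for i in range(N_PRIMITIVES)], goal_eps[k])
        for k in range(N_ROLLOUTS)])

    for i in range(N_PRIMITIVES):
        # Shape of primitive i: weighted by the cost accumulated only up to the
        # next primitive, i.e. the cost incurred during primitive i itself.
        w_shape = pi2_weights(costs[:, i])
        shapes[i] += w_shape @ shape_eps[i]

        # Subgoal after primitive i: weighted by the cost accumulated from here
        # to the end of the entire primitive sequence.
        w_goal = pi2_weights(costs[:, i:].sum(axis=1))
        subgoals[i] += w_goal @ goal_eps[:, i]

In the paper itself, the shape and goal parameters are those of Dynamic Movement Primitives and the rollouts are executed on the 11-DOF arm and hand; the toy quadratic cost above merely keeps the sketch self-contained.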
BibTeX
@InProceedings{stulp11hierarchical,
  title                    = {Hierarchical Reinforcement Learning with Motion Primitives},
  author                   = {Freek Stulp and Stefan Schaal},
  booktitle                = {11th IEEE-RAS International Conference on Humanoid Robots},
  year                     = {2011},
  abstract                 = {Temporal abstraction and task decomposition drastically reduce the search space for planning and control, and are fundamental to making complex tasks amenable to learning. In the context of reinforcement learning, temporal abstractions are studied within the paradigm of \emph{hierarchical} reinforcement learning. We propose a hierarchical reinforcement learning approach by applying our algorithm PI2 to \emph{sequences} of Dynamic Movement Primitives. For robots, this representation has some important advantages over discrete representations in terms of scalability and convergence speed. The parameters of the Dynamic Movement Primitives are learned simultaneously at different levels of temporal abstraction. The shape of a movement primitive is optimized w.r.t. the costs \emph{up to the next} primitive in the sequence, and the subgoals between two movement primitives w.r.t. the costs \emph{up to the end} of the entire movement primitive sequence. We implement our approach on an 11-DOF arm and hand, and evaluate it in a pick-and-place task in which the robot transports an object between different shelves in a cupboard.},
  bib2html_accrate         = {Oral: 15\%},
  bib2html_pubtype         = {Refereed Conference Paper},
  bib2html_rescat          = {Reinforcement Learning of Robot Skills}
}

This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright.


Generated by bib2html.pl (written by Patrick Riley) on Mon Jul 20, 2015 21:50:11