Publications Freek Stulp


Model-free Reinforcement Learning of Impedance Control in Stochastic Environments
Stulp, F., Buchli, J., Ellmer, A., Mistry, M., Theodorou, E., and Schaal, S. Model-free Reinforcement Learning of Impedance Control in Stochastic Environments. IEEE Transactions on Autonomous Mental Development, 4(4):330–341, 2012.
Download: [PDF] (2.4MB)
Abstract
For humans and robots, variable impedance control is an essential component for ensuring robust and safe physical interaction with the environment. Humans learn to adapt their impedance to specific tasks and environments; a capability which we continually develop and improve until we are well into our twenties. In this article, we reproduce functionally interesting aspects of learning impedance control in humans on a simulated robot platform. As demonstrated in numerous force field tasks, humans combine two strategies to adapt their impedance to perturbations, thereby minimizing position error and energy consumption: 1) if perturbations are unpredictable, subjects increase their impedance through co-contraction; 2) if perturbations are predictable, subjects learn a feed-forward command to offset the perturbation. We show how a 7-DOF simulated robot demonstrates similar behavior with our model-free reinforcement learning algorithm PI2, by applying deterministic and stochastic force fields to the robot's end-effector. We show the qualitative similarity between the robot and human movements. Our results provide a biologically plausible approach to learning appropriate impedances purely from experience, without requiring a model of either body or environment dynamics. Not requiring models also facilitates autonomous development for robots, as pre-specified models cannot be provided for each environment a robot might encounter.
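The abstract describes learning impedance parameters with the model-free PI2 (Policy Improvement with Path Integrals) algorithm, by minimizing a cost that trades off position error against energy consumption. The following is a minimal, illustrative sketch of a PI2-style parameter update on a toy quadratic cost; the function names, rollout counts, and the cost itself are stand-ins, not the paper's actual robot setup.

```python
import numpy as np

def pi2_update(theta, cost_fn, rng, n_rollouts=20, noise_std=0.1, temperature=10.0):
    """One PI2-style update: explore with parameter noise, reweight rollouts by cost.

    Low-cost noisy rollouts receive exponentially higher weight, and the
    parameters move toward the cost-weighted average of the explored samples.
    No model of the system is required -- only cost evaluations (model-free).
    """
    eps = rng.normal(0.0, noise_std, size=(n_rollouts, theta.size))
    costs = np.array([cost_fn(theta + e) for e in eps])
    # Normalize costs to [0, 1], then soft-max so low cost -> high weight.
    s = (costs - costs.min()) / max(costs.max() - costs.min(), 1e-12)
    w = np.exp(-temperature * s)
    w /= w.sum()
    return theta + w @ eps

def cost(theta):
    """Toy cost mirroring the abstract's trade-off: tracking error + 'energy'."""
    target = np.array([1.0, 0.5])          # illustrative desired gains
    tracking_error = np.sum((theta - target) ** 2)
    energy = 0.01 * np.sum(theta ** 2)     # penalize large gains (co-contraction)
    return tracking_error + energy

rng = np.random.default_rng(0)
theta = np.zeros(2)                        # start with zero gains
for _ in range(200):
    theta = pi2_update(theta, cost, rng)
```

Because the update is a reward-weighted average of sampled perturbations rather than a gradient step, it needs no dynamics model of the body or environment, which is the property the abstract highlights for autonomous development.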
BibTeX
@Article{stulp12modelfree,
  title                    = {Model-free Reinforcement Learning of Impedance Control in Stochastic Environments},
  author                   = {Stulp, F. and Buchli, J. and Ellmer, A. and Mistry, M. and Theodorou, E. and Schaal, S.},
  journal                  = {IEEE Transactions on Autonomous Mental Development},
  year                     = {2012},
  number                   = {4},
  pages                    = {330--341},
  volume                   = {4},
  abstract                 = {For humans and robots, variable impedance control is an essential component for ensuring robust and safe physical interaction with the environment. Humans learn to adapt their impedance to specific tasks and environments; a capability which we continually develop and improve until we are well into our twenties. In this article, we reproduce functionally interesting aspects of learning impedance control in humans on a simulated robot platform. As demonstrated in numerous force field tasks, humans combine two strategies to adapt their impedance to perturbations, thereby minimizing position error and energy consumption: 1)~if perturbations are unpredictable, subjects increase their impedance through co-contraction; 2)~if perturbations are predictable, subjects learn a feed-forward command to offset the perturbation. We show how a 7-DOF simulated robot demonstrates similar behavior with our model-free reinforcement learning algorithm PI2, by applying deterministic and stochastic force fields to the robot's end-effector. We show the qualitative similarity between the robot and human movements. Our results provide a biologically plausible approach to learning appropriate impedances purely from experience, without requiring a model of either body or environment dynamics. Not requiring models also facilitates autonomous development for robots, as pre-specified models cannot be provided for each environment a robot might encounter.},
  bib2html_pubtype         = {Journal},
  bib2html_rescat          = {Reinforcement Learning of Variable Impedance Control}
}

This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints.


Generated by bib2html.pl (written by Patrick Riley) on Mon Jul 20, 2015 21:50:11