In most activities of daily living, related tasks are encountered over and over again: countless times we flip light switches, insert keys in locks, pour coffee, and brush our teeth. To exploit this regularity, humans reuse existing motor skills for recurring tasks. For robots, using a set of motor skills likewise drastically reduces the search space for control, facilitates learning, reduces reliance on accurate analytical models, and incurs negligible computational load during execution. My goal is therefore to leverage the advantages of this skill-centric approach to achieve autonomous robots that operate flexibly, robustly, and safely in human environments.
Current/past work. Some of the specific research challenges in skill learning that I am working on, or have worked on, are:
1. Acquiring skills through imitation.
   - Training Dynamic Movement Primitives with prototypical human trajectories (HR'09).
2. Improving skills through reinforcement learning. In particular:
   - Learning skills in very high-dimensional action spaces (HR'10).
   - Simultaneously learning planned trajectories and control parameters for variable impedance control (RSS'10, IJRR'11).
   - Acquiring skills with an intrinsic robustness to state estimation uncertainty (ICRA'11, IROS'11).
   - Optimizing skills hierarchically (HR'11).
3. Learning the perceptual features that are relevant to the task, skill, and cost.
   - In related work on face model fitting, we demonstrated how such task-specific features can be acquired through machine learning (PAMI'08). One of my current main research focuses is applying these methods to robotics.
4. Determining the applicability of skills.
5. Combining declarative knowledge (symbolic operators) with procedural knowledge (skills).
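As a concrete illustration of the imitation step, the sketch below fits a one-dimensional Dynamic Movement Primitive to a single demonstrated trajectory and then replays it toward a (possibly new) goal. This is a minimal generic DMP, not the formulation from the cited papers; all gains, the basis-function layout, and the class name are illustrative assumptions.

```python
import numpy as np

class DMP1D:
    """Minimal one-DOF Dynamic Movement Primitive (illustrative sketch).

    Transformation system:  tau * v' = K*(g - x) - D*v + (g - x0)*f(s)
                            tau * x' = v
    Canonical system:       tau * s' = -alpha_s * s
    Gains and basis layout are illustrative choices, not tuned values.
    """

    def __init__(self, n_basis=20, K=100.0, alpha_s=4.0):
        self.K, self.D = K, 2.0 * np.sqrt(K)      # critically damped spring
        self.alpha_s = alpha_s                    # canonical-system decay
        # Gaussian basis centers, spaced along the phase variable s
        self.c = np.exp(-alpha_s * np.linspace(0.0, 1.0, n_basis))
        self.h = 1.0 / np.diff(self.c, append=self.c[-1] * 0.5) ** 2
        self.w = np.zeros(n_basis)

    def _features(self, s):
        # Normalized basis activations, gated by s so forcing vanishes as s -> 0
        psi = np.exp(-self.h * (s - self.c) ** 2)
        return psi * s / (psi.sum() + 1e-10)

    def imitate(self, x_demo, dt, tau=1.0):
        """Fit forcing-term weights to one demonstration (start != goal assumed)."""
        x0, g = x_demo[0], x_demo[-1]
        v = np.gradient(x_demo, dt) * tau
        vdot = np.gradient(v, dt)
        s = np.exp(-self.alpha_s * np.arange(len(x_demo)) * dt / tau)
        # Solve the transformation system for the forcing term the demo implies
        f_target = (tau * vdot - self.K * (g - x_demo) + self.D * v) / (g - x0)
        Phi = np.array([self._features(si) for si in s])
        self.w, *_ = np.linalg.lstsq(Phi, f_target, rcond=None)
        return x0, g

    def rollout(self, x0, g, dt, T, tau=1.0):
        """Integrate the DMP forward; g may differ from the demonstrated goal."""
        x, v, s = x0, 0.0, 1.0
        traj = [x]
        for _ in range(int(round(T / dt))):
            f = self._features(s) @ self.w
            vdot = (self.K * (g - x) - self.D * v + (g - x0) * f) / tau
            v += vdot * dt
            x += (v / tau) * dt
            s += (-self.alpha_s * s / tau) * dt
            traj.append(x)
        return np.array(traj)
```

Because the forcing term is gated by the decaying phase variable, the system always converges to the goal attractor, and the same learned weights generalize to new start and goal positions, which is what makes such primitives reusable building blocks for the reinforcement-learning stage.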
Application domains. It is my aim to develop general methods and algorithms that apply to a wide range of robotics domains. So far, I have applied my methods to (mobile) manipulation (AR'10), humanoid robots (HR'09,HR'10), and robotic soccer (RASJ'10).
Long-term vision. To achieve robots that are flexible, robust, and safe enough to operate in human environments, it will be essential to use machine learning pervasively in perception, planning, and control. With machine learning, models are based on the experience the robot gathers in the real world, rather than on a designer's model of the world. I also believe we should apply a developmental approach to robotics: rather than having an expert accompany the robot through steps 1-5 above for each skill it might need, the robot continually iterates through steps 1-5 autonomously, thereby acquiring new skills of increasing power and complexity.
The key challenge here will be to determine the right amount of bias (i.e., the amount of structure that the expert designer pre-specifies) that the system starts out with. Too little bias and the system will not be able to learn anything; too much bias (e.g., complete analytical models) and the robot will not be able to learn 'beyond the model' to adapt and improve robustness. The main hypothesis underlying my work is that skills provide the right amount of bias to structure perception, planning, and control. On the one hand, skills allow robots to perform known tasks efficiently, robustly, and safely, and to keep improving at them. On the other hand, they enable robots to continually and effectively acquire new skills for novel tasks over the course of their operational lifetime (whereby the Internet could enable robots to exchange novel skills and experiences).
My ultimate goal is robots that help people with disabilities attain prolonged autonomy and quality of life, with robot assistants serving as compensation systems for the Activities of Daily Living (Gerontechnology'08). Many research challenges will have to be addressed before this vision is achieved, but I am confident we will see it realized within my own operational lifetime.