BRENNA D. ARGALL


LEARNING ROBOT MOTION CONTROL


Motion control is fundamental to many robotics applications, and is known to be a difficult problem. Execution in real-world environments is confounded by noisy sensors, approximate world models and uncertain action execution. Many contributing factors to behavior execution are potential targets for improvement, for example improved hardware that provides more precise sensor readings or more accurate action execution. My research targets the improvement of action selection paradigms; specifically, the development of more robust control policies, or mappings from world observations to robot actions.
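To make the notion of a control policy concrete, the sketch below casts a policy as a function from observations to actions. It is a minimal illustration only; every name and value in it (Observation, Action, turn_toward_goal, the proportional gain) is an assumption made for this example, not something drawn from my own work.

    # A minimal sketch of a control policy: a mapping from world
    # observations to robot actions. All names and values here are
    # illustrative assumptions.
    from typing import Callable, Sequence

    Observation = Sequence[float]  # e.g., processed sensor readings
    Action = tuple                 # e.g., (forward speed, turn rate)

    def turn_toward_goal(obs: Observation) -> Action:
        heading_error = obs[0]           # assumed: heading error in radians
        turn_rate = 0.5 * heading_error  # hand-tuned proportional gain
        return (0.2, turn_rate)          # constant forward speed, computed turn

    policy: Callable[[Observation], Action] = turn_toward_goal
    print(policy([0.3]))  # -> (0.2, 0.15)

Even this trivial policy exposes a hand-tuned parameter, hinting at the development effort discussed below.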

Policy development remains a core challenge within robotics, and typically requires a high level of effort and expertise. From the standpoint of the robotics expert, the effort required limits the number of developed behaviors, as much time is devoted to details such as parameter tuning. From the standpoint of those who are not robotics experts, behavior development is entirely inaccessible. By limiting the development of robot behaviors to experts, I believe that we limit the growth of the field with respect to both volume (fewer people developing behaviors) and direction (development from the viewpoints of engineers only).

The presence of robots outside of the laboratory is becoming ever more common, from recreational robots in the home to exploration rovers in space, and the range of potential application domains continues to grow. As robots become more prevalent outside of the lab, their operators increasingly will include those who are not robotics experts, presenting a demand for more accessible policy development techniques. Furthermore, outside of laboratory or industrial settings the complexity of the operational environment typically increases; since many of the difficulties associated with policy development scale with robot and domain complexity, classical techniques such as hand-modeling the world dynamics often become intractable.

A primary goal of my research is to develop techniques for learning robot motion control policies that reduce the requirements placed on robotics experts, in order to increase the profusion of robot behaviors and promote robot autonomy. A secondary goal is that these techniques be accessible to non-experts as well. The work of my dissertation and postdoc focused on enriching policy development with human feedback, in particular corrective feedback, which pairs with teacher demonstration and machine learning techniques to develop robot motion control policies. This approach takes inspiration from the way humans teach other humans: by enabling a human to teach a robot similarly, it better facilitates knowledge transfer from human to robot, and thus the exploitation of human knowledge and task expertise.
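As a schematic illustration of this demonstration-plus-correction paradigm (and not the specific algorithms of my dissertation), the sketch below learns a policy by nearest-neighbor regression over teacher-demonstrated observation-action pairs, and folds teacher corrections back into the same dataset. All class and method names are assumptions made for illustration.

    # A schematic sketch: policy learning from demonstration with
    # corrective feedback, via k-nearest-neighbor regression. This
    # illustrates the general paradigm only; names are illustrative.
    import numpy as np

    class DemonstrationPolicy:
        def __init__(self, k: int = 3):
            self.k = k
            self.observations = []  # demonstrated world observations
            self.actions = []       # demonstrated robot actions

        def add_demonstration(self, obs, action):
            """Record one (observation, action) pair from a teacher execution."""
            self.observations.append(np.asarray(obs, dtype=float))
            self.actions.append(np.asarray(action, dtype=float))

        def add_correction(self, obs, corrected_action):
            """Incorporate teacher feedback: a corrected action for an
            observation encountered during learner execution."""
            self.add_demonstration(obs, corrected_action)

        def select_action(self, obs):
            """Predict an action by averaging the k nearest demonstrations."""
            obs = np.asarray(obs, dtype=float)
            dists = [np.linalg.norm(obs - o) for o in self.observations]
            nearest = np.argsort(dists)[: self.k]
            return np.mean([self.actions[i] for i in nearest], axis=0)

    policy = DemonstrationPolicy(k=1)
    policy.add_demonstration([0.0], [0.2, 0.0])
    policy.add_demonstration([0.3], [0.2, 0.15])
    print(policy.select_action([0.25]))        # -> [0.2  0.15]
    policy.add_correction([0.25], [0.2, 0.1])  # teacher corrects the action
    print(policy.select_action([0.25]))        # -> [0.2  0.1]

The point of the sketch is that the teacher's corrections enter through the same channel as the original demonstrations, so the policy improves without any expert re-engineering.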
