
Jeannette Bohg: Perception for robust, autonomous robotic manipulation — A question of the right trade-off between prior knowledge and learning from data

Jeannette Bohg
February 22, 2016, 1:30pm to 2:50pm
Braun Lecture Hall (in the Mudd Chemistry Building, next door to the Braun Auditorium)

Open to the public

Given a stream of raw, multi-modal sensory input data, an autonomous robot has to continuously decide how to act to achieve a specific task. This requires the robot to map a very high-dimensional space (sensory data) to another high-dimensional space (motor commands). The non-linear relationship between the two spaces can only be captured if we introduce suitable biases and task-specific prior knowledge that structure this mapping. At the same time, these biases have to leave enough flexibility to cope with the expected variability in the robot's task. Increased model flexibility, however, comes at a price: more open parameters, which must either be manually tuned or learned from a sufficient amount of data.
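As an illustrative sketch only (not code from the talk), the trade-off between a strong prior with few open parameters and a flexible model with many can be made concrete with polynomial fitting on a small, noisy dataset; all variable names below are hypothetical:

```python
# Illustrative sketch: strong prior / few parameters (degree-1 polynomial)
# vs. weak prior / many parameters (degree-7 polynomial), both fit to the
# same 8 noisy samples of an underlying linear function.
import numpy as np

rng = np.random.default_rng(0)

# Ground truth is linear; we observe 8 noisy training samples.
x_train = np.linspace(0.0, 1.0, 8)
y_train = 2.0 * x_train + 1.0 + rng.normal(0.0, 0.3, size=x_train.shape)

# Strong bias: a line, with only 2 open parameters.
coef_simple = np.polyfit(x_train, y_train, deg=1)
# Weak bias: degree-7 polynomial, 8 open parameters -- enough to pass
# through every noisy training point exactly (it memorizes the noise).
coef_flex = np.polyfit(x_train, y_train, deg=7)

def rmse(coef, x, y):
    """Root-mean-square error of the polynomial `coef` on (x, y)."""
    return float(np.sqrt(np.mean((np.polyval(coef, x) - y) ** 2)))

train_err_simple = rmse(coef_simple, x_train, y_train)
train_err_flex = rmse(coef_flex, x_train, y_train)

# Held-out points drawn from the true, noise-free function.
x_test = np.linspace(0.05, 0.95, 50)
y_test = 2.0 * x_test + 1.0
test_err_simple = rmse(coef_simple, x_test, y_test)
test_err_flex = rmse(coef_flex, x_test, y_test)

print(f"train RMSE: simple={train_err_simple:.3f}, flexible={train_err_flex:.2e}")
print(f"test  RMSE: simple={test_err_simple:.3f}, flexible={test_err_flex:.3f}")
```

The flexible model drives its training error to essentially zero, yet the extra parameters it had to learn from too little data encode noise rather than task structure, which is exactly the price of flexibility the abstract describes.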

In this talk, I illustrate this trade-off by analyzing two problems in perception for autonomous robotic grasping and manipulation: (i) learning to grasp objects given only partial and noisy sensory data and (ii) visual object tracking. I present different approaches to each problem; these approaches sit at different points on the spectrum between the amount of prior knowledge incorporated in the model and the number of open parameters learned from data. Based on these examples, I conclude by discussing the different ways to include biases and prior knowledge in a model and how to choose a suitable, task-specific trade-off with respect to the number of remaining open parameters.


Jeannette Bohg is a senior research scientist at the Autonomous Motion Department, Max Planck Institute for Intelligent Systems in Tübingen, Germany. She holds a Diploma in Computer Science from the Technische Universität Dresden, Germany, and a Master's degree in Applied Information Technology from Chalmers University of Technology in Gothenburg, Sweden. In 2012, she received her PhD from the Royal Institute of Technology (KTH) in Stockholm, Sweden. Her research interests lie at the intersection of robotic manipulation, computer vision, and machine learning. Specifically, she analyses how a robot can incorporate continuous, multi-modal sensory feedback to achieve robust and dexterous manipulation in the presence of uncertainty and a dynamically changing environment.