My long-term research goal is to enable real robots to manipulate any kind of object so that they can perform many different tasks in a wide variety of application scenarios, such as in our homes, hospitals, warehouses, or factories. These tasks will require fine sensorimotor skills, for example to use tools, operate devices, assemble parts, or handle deformable objects. I would claim that equipping robots with these sensorimotor skills is one of the biggest challenges in robotics. The currently dominant approach to achieving universal sensorimotor skills is imitation learning, combined with collecting as much robot data as humanly possible. The promise of this approach is a foundation model for robotics. While I believe in the power of data and simple learning models, I think we need a broader vision and must look beyond data collection to achieve the goal of a generalist robot. In this talk, I will discuss the need for, and some approaches to, (1) designing better robot policy architectures, (2) incorporating multi-sensory data, (3) developing online and lifelong learning algorithms, and (4) building agile robot hardware.
About Jeannette Bohg
I'm a Professor of Robotics at Stanford University, where I direct the Interactive Perception and Robot Learning Lab. In general, my research explores two questions: What are the underlying principles of robust sensorimotor coordination in humans, and how can we implement them on robots? Research on these questions necessarily sits at the intersection of Robotics, Machine Learning, and Computer Vision. In my lab, we are specifically interested in Robotic Grasping and Manipulation.
Prospective students and post-docs, please see this page.