There is great excitement surrounding data-driven techniques for perceptual classification, inference, and motor control. These techniques come to robotic manipulation with the promise of enabling behavior with greater robustness, performance, and adaptability, as well as suggesting new representations for physical interaction. That excitement, however, is tempered by the significant challenges faced in the lab when building practical data-driven robots. This workshop focuses on the challenges involved in making the data-driven approach work for robotic manipulation.

Robot manipulation is a useful "petri dish" in which to study data-driven systems. Hands, or end-effectors, are where the "rubber hits the road": where robots make and break contact with the world, and where visual, tactile, and proprioceptive feedback combine. While interacting with the environment, the robot is exposed to a great deal of data---information that is challenging to collect, maintain, organize, and use. On one end, we need an already functional robotic system just to start capturing data, and in the process the system is prone to degrade and/or break. On the other end, the dynamics and perceptual feedback of robotic manipulation systems yield multi-modal data that is complicated to make sense of. The goal of this workshop is to identify the challenges preventing data-driven robotic manipulation from experiencing the same performance jump as other fields that have embraced it, and what we can do to overcome them.

Focused Workshop Topics

  • Multi-modality in Dynamics and Perception. Physical systems that make and break contact, that are exposed to frictional forces that stick and slip, and that in general undergo discrete reconfigurations exhibit multi-modal behavior. This hybridness, which appears as complicated statistics over traces of states and perceptual cues, is challenging to learn with standard, well-understood smooth regression models.
  • Data Degrades and Physical Systems Break. In robotic manipulation, useful action and meaningful sensing are often intrusive and sometimes aggressive. Contact produces frictional and impact forces that over time can change or break the system, making its dynamics a moving target for identification and learning algorithms. The value of data can hence diminish over time.
  • The High Cost of Data. There is no shortcut to the fact that to capture even a first datapoint, we need an already functional system that can engage in physical interaction in a purposeful and safe manner. Collecting a few datapoints poses the challenge of having to be selective, accurate, and incremental for them to be meaningful. Collecting a large amount of datapoints requires automation which, except in particularly constrained scenarios, might be as challenging as the original task.
  • Agreements and Disagreements between Physics and Data. Conventional wisdom says that data and physics should work together, one providing accuracy and the other generality. In practice, however, it is challenging to combine them and resolve their disagreements without resorting to ad hoc heuristics.

(Awesome) Speaker Lineup

  • Pieter Abbeel (Berkeley)
    Pieter Abbeel has developed apprenticeship learning algorithms for helicopter aerobatics, and enabled the first end-to-end system for picking up a crumpled laundry article and folding it. His current research focuses on robotics and machine learning with a particular focus on challenges in personal robotics, surgical robotics and connectomics. 

  • Siddhartha Srinivasa (UW)
    Siddhartha Srinivasa seeks to enable robots to robustly and gracefully interact with the world to perform complex tasks in uncertain, unstructured, and cluttered environments. He wants to make this interaction faster, safer, and more elegant, and to require simpler actuation.

  • Kris Hauser (Duke)
    Kris Hauser's research includes robot motion planning and control, semiautonomous robots, and integrating perception and planning, as well as applications to intelligent vehicles, robotic manipulation, robot-assisted medicine, and legged locomotion.

  • Jan Peters (TU-Darmstadt)
    Jan Peters' research lies at the intersection between machine learning and robotics. His research has been recognized with multiple awards, and he is very active in organizing venues that bring members of both fields together.

  • Aude Billard (EPFL)
    Aude Billard is interested in the control and design of robotic systems that interact with humans. She pursues three research areas: a) control systems for teaching robots through human demonstration; b) neural and cognitive processes in human imitation learning; c) user-friendly human-computer interfaces to facilitate human-robot interaction.

  • Byron Boots (GaTech)
    Byron Boots performs research in machine learning, artificial intelligence, and robotics with a focus on theory and systems that tightly integrate perception, learning, and control. He works on a range of problems including computer vision, system identification, forecasting, simultaneous localization & mapping, motion planning, and optimal control.

  • Emo Todorov (UW) - Pending travel plans.
    Emo Todorov is interested in the control of complex movements in animals and robots. He focuses on developing methods for optimal control and applying them to hard problems. A key tool in his research is his MuJoCo physics engine.


9:00 Introduction by organizers
Topic 1: The High Cost of Data
9:20 Speaker 1
9:40 Speaker 2
10:00 Discussion co-directed by speakers 1 and 2
10:20 Highlight Talks from poster presenters
10:30 Coffee Break / Poster session
Topic 2: Data Degrades and Physical Systems Break
11:00 Speaker 3
11:20 Speaker 4
11:40 Discussion co-directed by speakers 3 and 4
12:00 Lunch break
Topic 3: Multi-modality in Dynamics and Perception
13:30 Speaker 5
13:50 Speaker 6
14:10 Speaker 7
14:30 Discussion co-directed by speakers 5, 6 and 7
15:00 Highlight Talks from poster presenters
15:20 Coffee Break / Poster session
Topic 4: Agreements and Disagreements between Physics and Data
15:50 Speaker 8
16:10 Speaker 9
16:30 Speaker 10
16:50 Discussion co-directed by speakers 8, 9 and 10
17:20 Closing remarks


Supported by