Learning to Learn More with Less

AIR Speaker Series

Yuxiong Wang, Postdoctoral Fellow, Robotics Institute, Carnegie Mellon University

When:
Monday, November 18, 2019
12:00pm-1:00pm

Where:
Hariri Institute for Computing, Seminar Room MCS 157, 111 Cummington Mall, Boston, MA


Abstract:
Understanding how humans and machines learn from a few examples remains a fundamental challenge. Humans are remarkably capable of grasping a new concept from just a few examples or learning a new skill from just a few trials. By contrast, state-of-the-art machine learning techniques typically require thousands of training examples and often break down when the training set is small.

In this talk, I will discuss our efforts towards endowing visual learning systems with few-shot learning ability. Our key insight is that the visual world is well structured and highly predictable not only in feature spaces but also in the under-explored model and data spaces. Such structures and regularities enable a system to learn how to learn new tasks rapidly by re-using previous experience. I will focus on two topics that demonstrate how to leverage this idea of learning to learn, or meta-learning, to address a broad range of few-shot learning tasks: meta-learning in model space and task-oriented generative modeling. I will also discuss some ongoing work towards building machines that can operate in highly dynamic and open environments, making intelligent and independent decisions despite insufficient information.
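The abstract does not specify an algorithm, but the "learning to learn" idea it describes can be made concrete with a minimal gradient-based meta-learning sketch in the style of first-order MAML. This is an illustration of the general idea only, not the methods presented in the talk; the toy task family, linear model, and hyperparameters below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    # A task is a random linear function y = a*x + b drawn from a
    # hypothetical task distribution (an assumption for this sketch).
    a = rng.uniform(0.5, 2.0)
    b = rng.uniform(0.0, 1.0)
    def sample(n):
        x = rng.uniform(-1.0, 1.0, size=n)
        return x, a * x + b
    return sample

def loss_and_grad(w, x, y):
    # Squared-error loss and its gradient for the model y_hat = w[0]*x + w[1].
    err = w[0] * x + w[1] - y
    grad = np.array([np.mean(2.0 * err * x), np.mean(2.0 * err)])
    return np.mean(err ** 2), grad

inner_lr, outer_lr = 0.1, 0.01
w = np.zeros(2)  # meta-parameters: a shared initialization across tasks

for step in range(2000):
    sample = sample_task()
    x_s, y_s = sample(5)    # few-shot "support" set for fast adaptation
    x_q, y_q = sample(20)   # "query" set that scores the adapted model

    # Inner loop: one gradient step adapts the shared initialization
    # to the current task from only five examples.
    _, g_inner = loss_and_grad(w, x_s, y_s)
    w_task = w - inner_lr * g_inner

    # Outer loop: update the initialization so that one-step adaptation
    # generalizes to held-out query data (first-order approximation).
    _, g_outer = loss_and_grad(w_task, x_q, y_q)
    w = w - outer_lr * g_outer

print("meta-learned initialization:", w)

After meta-training, the learned initialization can be adapted to a brand-new task with a single small gradient step on a handful of examples, which is the few-shot regime the abstract describes.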


Bio:
Yuxiong Wang is a postdoctoral fellow in the Robotics Institute at Carnegie Mellon University, where he received his Ph.D. in robotics in 2018. His research interests lie at the intersection of computer vision, machine learning, and robotics, with a particular focus on few-shot learning and meta-learning. He has also spent time at Facebook AI Research (FAIR).