- Stress Management (1:00 pm)
- Alumni Weekend: Black In The Entertainment Industry: Reflections On Courage, Challenge & Creativity (3:30 pm)
- Stopping Suicide: A Population Health Approach to Preventing Suicide—Panel 1: Understanding Suicide (4:30 pm)
- Gitner Family Lecture: Overcoming Dataset Bias in Artificial Intelligence (5:00 pm)
- Mind, Body, Spirit Yoga (5:30 pm)
- School of Theology Presents: Arts & the Pursuit of Justice (5:30 pm)
- Global Music Month 2020: BUGMF Program (8:00 pm)
Gitner Family Lecture: Overcoming Dataset Bias in Artificial Intelligence
The recent successes of AI can be mostly attributed to "learning algorithms": software that learns to classify patterns based on examples collected into training datasets. In this talk, part of Alumni Weekend, Associate Professor of Computer Science Kate Saenko will focus on learning algorithms called artificial neural networks, applied to image recognition problems such as recognizing road signs for autonomous driving or classifying human activities in video. These AI models are vulnerable to "dataset bias," which arises when the algorithm's training data is not representative of future test data. A classic example is an AI that learns to classify handwritten digits but then fails to recognize typewritten digits. The problem occurs in many situations: new geographic locations, different weather conditions, learning in simulation and operating in the real world, and so on. Dataset bias is a major problem in deep learning; even the most powerful deep neural networks fail to generalize to out-of-sample data. Saenko will discuss some of her research into how dataset bias can be detected and mitigated to improve the robustness of AI systems.
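To make the idea concrete, here is a toy sketch (not from the talk, and far simpler than the neural networks discussed) of how dataset bias shows up in practice: a nearest-centroid classifier is trained on data from one "domain," then evaluated on data whose features have shifted. All function names, the class centers, and the shift amount are invented for this illustration.

```python
import random

random.seed(0)

def make_data(n, shift):
    # Two classes with 1-D features: class 0 near 0+shift, class 1 near 3+shift.
    # A nonzero shift simulates a test domain the training set never covered.
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        x = random.gauss(3 * label + shift, 0.5)
        data.append((x, label))
    return data

def train_centroids(data):
    # "Training" here is just computing the mean feature value per class.
    sums = {0: 0.0, 1: 0.0}
    counts = {0: 0, 1: 0}
    for x, y in data:
        sums[y] += x
        counts[y] += 1
    return {y: sums[y] / counts[y] for y in (0, 1)}

def accuracy(model, data):
    # Predict the class whose centroid is nearest to each feature value.
    correct = 0
    for x, y in data:
        pred = min(model, key=lambda c: abs(x - model[c]))
        correct += (pred == y)
    return correct / len(data)

train_set = make_data(500, shift=0.0)   # training domain
in_domain = make_data(500, shift=0.0)   # test data from the same distribution
shifted = make_data(500, shift=3.0)     # biased test data: features offset by 3

model = train_centroids(train_set)
print(f"in-domain accuracy: {accuracy(model, in_domain):.2f}")
print(f"shifted-domain accuracy: {accuracy(model, shifted):.2f}")
```

The model is near-perfect on data drawn from its own training distribution but degrades sharply on the shifted domain, because the centroids it learned no longer line up with where the classes actually sit. Detecting and correcting this kind of mismatch is, in spirit, what domain-adaptation research addresses.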
Kate Saenko is a faculty member at Boston University and a consulting professor at the MIT-IBM Watson AI Lab. She leads the Computer Vision and Learning Group at BU and is the founder and co-director of the Artificial Intelligence Research (AIR) initiative. Kate received a PhD from MIT and did her postdoctoral training at UC Berkeley and Harvard. Her research interests are in the broad area of Artificial Intelligence with a focus on dataset bias, adaptive machine learning, learning for image and language understanding, and deep learning.
Once registered, you will be emailed a link to the online program. Moderator to be announced.
The annual Gerald and Deanne Gitner Family College of Arts & Sciences Lecture is designed to highlight a current CAS faculty member, in any field, whose teaching and research address topics of major importance for the broad interest and benefit of the BU community. It is held in the fall, usually in conjunction with Alumni Weekend.
| When | 5:00 pm to 6:00 pm on Thursday, October 1, 2020 |
| --- | --- |
| Location | Virtual |
| Contact Email | alumni@bu.edu |