“New Algorithms for Interpretable Machine Learning”
Cynthia Rudin, Duke University
Monday, December 10, 2018
Networking reception, 10:30am-11:00am; seminar, 11:00am-12:00pm
Colloquium Room, Kilachand Center, 610 Commonwealth Ave., Boston, MA
Abstract: With the widespread use of machine learning, there have been serious societal consequences from using black box models for high-stakes decisions, including flawed models for medical imaging and poor bail and parole decisions in criminal justice. Explanations for black box models are not reliable and can be misleading. Interpretable models, by contrast, come with their own explanations, which are faithful to what the model actually computes. I will present work on (i) optimal decision lists, (ii) interpretable neural networks for computer vision, and (iii) optimal scoring systems (sparse linear models with integer coefficients). In our applications, we have always been able to achieve interpretable models with the same accuracy as black box models.
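The parenthetical "sparse linear models with integer coefficients" can be made concrete with a minimal sketch. The features, point values, and threshold below are invented for illustration only and are not taken from the speaker's work:

```python
# Hypothetical scoring system: a sparse linear model whose coefficients
# are small integers, so a prediction can be computed by hand.
SCORECARD = {
    "age_over_60": 2,   # hypothetical point values, for illustration
    "prior_event": 3,
    "abnormal_lab": 1,
}
THRESHOLD = 4  # predict high risk when total points >= threshold

def score(record):
    """Sum the integer points for each feature present in the record."""
    return sum(pts for feat, pts in SCORECARD.items() if record.get(feat))

def predict_high_risk(record):
    """Apply the threshold to the integer score."""
    return score(record) >= THRESHOLD

# A record with age_over_60 and prior_event scores 2 + 3 = 5 points,
# which meets the threshold of 4.
record = {"age_over_60": True, "prior_event": True, "abnormal_lab": False}
```

Because every coefficient is a small integer and only a few features carry nonzero weight, the model's reasoning is transparent: the explanation is the model itself.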
Bio: Cynthia Rudin is an Associate Professor of computer science, electrical and computer engineering, and statistics at Duke University, where she directs the Prediction Analysis Lab, whose main focus is interpretable machine learning. Previously, Prof. Rudin held positions at MIT, Columbia, and NYU. She holds an undergraduate degree from the University at Buffalo and a PhD in applied and computational mathematics from Princeton University. She received the 2013 and 2016 INFORMS Innovative Applications in Analytics Awards and an NSF CAREER award, was named one of the “Top 40 Under 40” by Poets and Quants in 2015, and was named by Business Insider as one of the 12 most impressive professors at MIT in 2015. Work from her lab has won 10 best paper awards in the last 5 years. She is past chair of the INFORMS Data Mining Section and is currently chair of the Statistical Learning and Data Science Section of the American Statistical Association.