ECE Seminar with Prof. Brian Kulis

Starts:
1:00 pm on Tuesday, March 17, 2015
Ends:
2:30 pm on Tuesday, March 17, 2015
Location:
Photonics Center, PHO 339
URL:
http://www.bu.edu/ece/files/2015/03/FlyerMed-014.jpg
Speaker:
Prof. Brian Kulis, Ohio State University
Faculty Host:
Bobak Nazer

Light refreshments will be available outside of PHO 339 at 12:45 pm.

Small-Variance Asymptotics for Large-Scale Learning

Abstract: Scalable algorithms for the rich analysis of large-scale data are applicable to an ever-increasing number of problems in academia, industry, and beyond. This talk will focus on a general method for converting rich probabilistic models for data analysis into scalable learning algorithms via the technique of small-variance asymptotics. We will take as a starting point the widely known relationship between the Gaussian mixture model and k-means for clustering data: as the covariances of the clusters shrink, the EM algorithm approaches the k-means algorithm and the negative log-likelihood approaches the k-means objective. Similar asymptotic connections exist for other machine learning models, including dimensionality reduction (probabilistic PCA becomes PCA), multiview learning (probabilistic CCA becomes CCA), and classification (a restricted Bayes optimal classifier becomes the SVM). The asymptotic non-probabilistic counterparts to the probabilistic models are almost always more scalable, and are typically easier to analyze, making them useful alternatives in many situations. We will explore how to extend such asymptotics to a richer class of probabilistic models, with a focus on large-scale graphical models, Bayesian nonparametric models, and time-series data. We will develop the mathematical tools needed for these extensions and describe a framework for designing scalable optimization problems derived from the rich probabilistic models. Applications are diverse, and include topic modeling, network evolution, and deep feature learning.

Speaker Bio: Brian Kulis is an assistant professor in the Department of Computer Science and Engineering and the Department of Statistics at Ohio State University.
His research focuses on machine learning, statistics, computer vision, data mining, and large-scale optimization. Previously, he was a postdoctoral fellow at UC Berkeley EECS and was also affiliated with the International Computer Science Institute. He obtained his PhD in computer science from the University of Texas in 2008, and his BA in computer science and mathematics from Cornell University in 2003. For his research, he has won three best paper awards at top-tier conferences: two at the International Conference on Machine Learning (in 2005 and 2007) and one at the IEEE Conference on Computer Vision and Pattern Recognition (in 2008). He is the recipient of an NSF CAREER award, an MCD graduate fellowship from the University of Texas (2003-2007), and an Award of Excellence from the College of Natural Sciences at the University of Texas.
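The Gaussian-mixture-to-k-means limit mentioned in the abstract can be illustrated with a short sketch. This is an illustration only, not material from the talk: the data, the fixed centers, and the choice of a shared spherical covariance sigma^2 * I with equal mixture weights are all assumptions made for the example.

```python
# Sketch: as the shared spherical covariance sigma^2 * I of a Gaussian
# mixture shrinks, the EM responsibilities (E-step posteriors) harden
# into k-means' nearest-centroid assignments.
import numpy as np

def em_responsibilities(X, centers, sigma):
    """E-step of a GMM with equal weights and covariance sigma^2 * I."""
    # Squared distance from each point to each center, shape (n, k).
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    # Log responsibilities up to an additive constant; softmax over clusters.
    log_r = -d2 / (2.0 * sigma ** 2)
    log_r -= log_r.max(axis=1, keepdims=True)  # numerical stability
    r = np.exp(log_r)
    return r / r.sum(axis=1, keepdims=True)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two well-separated blobs and two (hypothetical) fixed centers.
    X = np.vstack([rng.normal(-3, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
    centers = np.array([[-1.0, 0.0], [1.0, 0.0]])

    for sigma in [5.0, 1.0, 0.01]:
        r = em_responsibilities(X, centers, sigma)
        # The mean of the largest responsibility per point approaches 1
        # as sigma shrinks, i.e., assignments become hard (k-means-like).
        print(f"sigma={sigma}: mean max responsibility = {r.max(axis=1).mean():.3f}")
```

In the same limit, the M-step mean update reduces to averaging the points assigned to each center, so one EM iteration becomes exactly one k-means iteration.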